
Statistix 10

User’s Manual

Copyright
Copyright © 1985-2013 Analytical Software. All rights reserved.


Notice
Statistix is a trademark of Analytical Software. Other brand and product names are
trademarks or registered trademarks of their respective holders.

License Statement
This license statement and limited warranty constitute a legal agreement
("License Agreement") between you ("Licensee", either as an individual or a
single entity) and Analytical Software ("Licensor"), sole worldwide publisher
of the software product Statistix ("Software").

By installing, copying, or otherwise using the Software, you agree to be bound
by all of the terms and conditions of the license agreement.

Upon your acceptance of the terms and conditions of the License Agreement,
Licensor grants you the right to use the Software in the manner provided below.

The Licensor reserves the right to license the same Software to other individuals or
entities under a different license agreement.

If a single-user license was purchased, the Licensee may install the Software in
one of two possible ways: (1) the Licensee may create a single installation of
the Software that may be used by one or more persons, or (2) the Licensee may
create multiple installations of the Software used exclusively by a single
person. Any combination of these options, or installing the Software on a
network server, is not permitted.

If a network license was purchased, the Licensee may install the Software in one
of three possible ways: (1) the Licensee may install the Software on a single
network server on the condition that the number of client computers that have
access to the Software does not exceed the number specified by the network
license; or (2) the Licensee may install the Software on a single network server
with any number of client computers, on the condition that metering software is
used to ensure that the number of client computers simultaneously accessing the
Software does not exceed the number specified by the network license; or (3) the
Licensee may install the Software on a single network server with any number of
client computers, on the condition that a good faith estimate of the total
number of people who will access the Software during any calendar year does not
exceed the number specified by the network license.

Regardless of the number of licenses purchased, the Licensee has the right to place an
additional copy of the Software on a removable medium for backup purposes only.

You must not attempt to reverse compile, modify, translate or disassemble the
Software in whole or in part.

The Software remains the exclusive property of the Licensor. Any Licensee who
fully complies with the terms of this license agreement may use the Software
according to those terms. You must not give copies of the Software to other
persons or entities. You must not transfer the Software to another person or entity.

The Licensor reserves the right to revoke your license, without prior notice,
if you violate any of the terms of this license agreement.

Limited Warranty
Analytical Software warrants the physical CD enclosed herein to be free of
defects in materials and workmanship for a period of 90 days from the purchase
date. The entire and exclusive liability and remedy for breach of this Limited
Warranty shall be limited to replacement of a defective CD and shall not include
or extend to any claim for or right to recover any other damages, including but
not limited to, loss of profit, data, or use of the software, or special,
incidental, or consequential damages or other similar claims. In no event shall
Analytical Software’s liability for any damages to you or any other person ever
exceed the lower of the suggested list price or the actual price paid for the
license to use the software.

Analytical Software specifically disclaims all other warranties, express or implied,
including, but not limited to, any implied warranty of merchantability or fitness for
a particular purpose. The user assumes the entire risk as to the quality,
performance, and accuracy of this product. Analytical Software shall not be held
responsible for any damages that may result from the use of this product, whether
through Analytical Software’s negligence or not.

System Requirements
Statistix runs on Windows-based personal computers. It requires Windows XP or
later. Both 32-bit and 64-bit versions of Windows 7 and Windows 8 are supported.
A minimum of 2 GB of RAM is recommended.

Technical Support
Should you need help using Statistix, call our office for free technical support
at 850-893-9371, Monday-Friday, 9:00 a.m.-5:00 p.m. Eastern Time. You can fax us
at 850-894-1134 or send email to support@statistix.com anytime.
C O N T E N T S

Chapter 1  Introduction . . . 7
  Using This Manual . . . 7
  Installing Statistix . . . 8
  The Statistix Menus . . . 8
  Pages and Tabs . . . 9
  The Statistix Toolbar . . . 10
  Data In Statistix . . . 12
    Variables and Variable Names . . . 12
    Data Types . . . 13
    Cases and Data Subsets . . . 13
    Data Set Size . . . 13
    Missing Values . . . 14
  Getting Data In and Out of Statistix . . . 14
  Statistix Dialog Boxes . . . 15
    Variable Name Selection . . . 16
    Saving Dialog Boxes . . . 17
  Results Page . . . 17
  Settings . . . 17
    General Settings . . . 18
    Spreadsheet Font . . . 19
    Report Font . . . 19
    Plot Colors . . . 19
    Plot Fonts . . . 20
    Plot Symbols / Line Patterns . . . 20

Chapter 2  Data Menu . . . 22
  Insert Variables . . . 27
  Insert Cases . . . 28
  Delete Variables . . . 29
  Delete Cases . . . 29
  Delete Omitted Cases . . . 30
  Delete Highlighted Cells . . . 30
  Fill . . . 31
  Transformations . . . 31
    Simple Assignment . . . 32
    Conditional Assignment . . . 33
    Converting Variable Types . . . 34
    Missing Values . . . 34
    Omitted Cases . . . 35
  Recode . . . 35
  Indicator Variables . . . 36
  Stack . . . 37
  Unstack . . . 38
  Transpose . . . 39
  Gram-Schmidt . . . 40
  Omit/Select/Restore Cases . . . 41
  Sort Cases . . . 43
  Rename Variables . . . 44
  Reorder Variables . . . 45
  Column Formats . . . 45
  Labels . . . 47
    Data Set Label . . . 47
    Variable Labels . . . 47
    Value Labels . . . 48
  Arithmetic and Logical Expressions . . . 49
    Arithmetic Expressions . . . 49
    Date and String Arithmetic . . . 50
    Logical (Boolean) Expressions . . . 50
    Machine Precision and Tests for Equality . . . 52
    Handling of Missing Values . . . 52
    Built-in Functions . . . 53

Chapter 3  File Menu . . . 61
  New . . . 62
  Open . . . 62
  Reopen . . . 63
  Sample Data . . . 63
  Save . . . 64
  Save As . . . 64
  Merge Cases . . . 66
  Merge Variables . . . 66
  Merge Labels, Transformations, Etc. . . . 67
  Summary File . . . 68
  Log File . . . 71
  Import Excel . . . 72
  Import Text File . . . 74
    Tab-Separated Files . . . 75
    Comma and Quote Files . . . 75
    Format Statement . . . 75
    Importing a Single Variable . . . 77
    Comment Lines . . . 77
  Export Excel File . . . 78
  Export Text File . . . 78
  View Text File . . . 81
  View Graph File . . . 82
  View Excel File . . . 83
  View Statistix File . . . 84
  File Info . . . 84
  Print . . . 85
  Exit Statistix . . . 89

Chapter 4  Summary Statistics . . . 90
  Descriptive Statistics . . . 92
  Frequency Distribution . . . 93
  Survey Frequency Distribution . . . 96
  Cross Tabulation . . . 97
  Breakdown . . . 99
  Stem and Leaf Plot . . . 101
  Percentiles . . . 103

Chapter 5  Summary Plots . . . 104
  Histogram . . . 105
  Cumulative Distribution Plot . . . 107
  Dot Plot . . . 109
  Error Bar Chart . . . 112
  Pie Chart . . . 114
  Box and Whisker Plot . . . 115
  Quantile Plot . . . 117
  Q-Q Plot . . . 119
  Scatter Plot . . . 122
  Scatter Plot Matrix . . . 124
  Coplot . . . 125

Chapter 6  One, Two, & Multi-Sample Tests . . . 129
  One-Sample T Test . . . 130
  Paired T Test . . . 131
  Sign Test . . . 132
  Wilcoxon Signed Rank Test . . . 134
  Two-Sample T Test . . . 136
  Wilcoxon Rank Sum Test . . . 138
  Median Test . . . 140
  One-Way AOV . . . 142
  Kruskal-Wallis One-Way AOV . . . 147
  Friedman Two-Way AOV . . . 151

Chapter 7  Hypothesis Tests . . . 155
  One Proportion . . . 155
  Two Proportions . . . 156
  One Mean . . . 157
  Two Means . . . 158
  One Variance . . . 159
  Two Variances . . . 160
  One Correlation . . . 161
  Two Correlations . . . 162

Chapter 8  Linear Models . . . 163
  Correlations (Pearson) . . . 164
  Partial Correlations . . . 166
  Variance-Covariance . . . 167
  Linear Regression . . . 169
    Comparison of Regression Lines . . . 171
    Durbin-Watson Test . . . 172
    Prediction . . . 173
    Plots . . . 175
    Save Residuals . . . 176
    Sensitivity . . . 180
    Stepwise AOV Table . . . 182
    Variance-Covariance of Betas . . . 183
    Miscellaneous Regression Topics . . . 183
    Best Model Selection . . . 184
  Best Subset Regressions . . . 187
  Stepwise Linear Regression . . . 189
  Logistic Regression . . . 193
    Classification Table . . . 197
    Hosmer-Lemeshow Statistic . . . 197
    Odds Ratios . . . 198
  Stepwise Logistic Regression . . . 200
  Poisson Regression . . . 202
  Additional Background on Logistic and Poisson Regression . . . 207
  Two Stage Least Squares Regression . . . 211
  Eigenvalues-Principal Components . . . 214

Chapter 9  Analysis of Variance . . . 218
  Completely Randomized Design . . . 219
  Randomized Complete Block Design . . . 221
  Latin Square Design . . . 224
  Balanced Lattice Design . . . 226
  Factorial Design . . . 229
  Split-Plot Design . . . 232
  Strip-Plot Design . . . 234
  Split-Split-Plot Design . . . 236
  Strip-Split-Plot Design . . . 237
  Repeated Measures Design . . . 238
  General AOV/AOCV . . . 243
  AOV Results Menu . . . 249
  Means and Standard Errors . . . 249
  All-pairwise Multiple Comparisons . . . 251
  Multiple Comparisons with a Control . . . 256
  Multiple Comparisons with the Best . . . 258
  Simple Effects . . . 259
  Contrasts . . . 261
  Polynomial Contrasts . . . 264
  Plots . . . 265
  Save Residuals . . . 267

Chapter 10  Nonlinear Models . . . 269
  Spline Interpolation . . . 269
  Polynomial Regression . . . 271
    Prediction . . . 275
    Plots . . . 276
    Save Residuals . . . 278
    Stepwise AOV Table . . . 278
    Variance-Covariance of Betas . . . 279
  Nonlinear Regression . . . 279
    Minimization History . . . 286
    Prediction . . . 287
    Plots . . . 287
    Save Residuals . . . 290
    Variance-Covariance of Betas . . . 291
  Loess . . . 291
    Analysis of Variance . . . 298
    Prediction . . . 299
    Plots . . . 300
    Save Residuals . . . 304

Chapter 11  Association Tests . . . 305
  Multinomial Test . . . 306
  Chi-Square Test . . . 307
  Kolmogorov-Smirnov Test . . . 311
  McNemar’s Symmetry Test . . . 313
  Two By Two Tables . . . 316
  Log-Linear Models . . . 318
    Specifying Starting Values . . . 321
    Save Residuals . . . 321
  Spearman Rank Correlations . . . 322

Chapter 12  Randomness/Normality Tests . . . 325
  Runs Test . . . 325
  Shapiro-Wilk Normality Test . . . 327
  Normal Probability Plot . . . 328

Chapter 13  Time Series . . . 330
  Time Series Plot . . . 331
  Autocorrelation . . . 333
  Partial Autocorrelation . . . 335
  Cross Correlation . . . 336
  Moving Averages . . . 337
    Forecast Table . . . 339
    Save Residuals . . . 340
  Exponential Smoothing . . . 340
    Forecast Table . . . 342
    Forecast Plot . . . 343
    Save Residuals . . . 343
  SARIMA . . . 344
    Forecast Table . . . 348
    Forecast Plot . . . 349
    Save Residuals . . . 350
    Variance-Covariance Matrix . . . 351

Chapter 14  Quality Control . . . 352
  Pareto Chart . . . 353
  P Chart . . . 354
  Np Chart . . . 356
  C Chart . . . 358
  U Chart . . . 360
  X Bar Chart . . . 362
  R Chart . . . 365
  S Chart . . . 367
  I Chart . . . 368
  MR Chart . . . 370
  EWMA Chart . . . 372
  Tolerance Intervals . . . 374

Chapter 15  Survival Analysis . . . 376
  Kaplan-Meier . . . 377
  Two-Sample Survival Tests . . . 381
  Multi-Sample Survival Tests . . . 383
  Mantel-Haenzel Test . . . 385
  Proportional Hazards Regression . . . 387

Chapter 16  Power Analysis . . . 390
  One-Sample/Paired T Test . . . 391
  Two-Sample T Test . . . 395
  Analysis of Variance . . . 398
    Power of the F Test . . . 398
    Power of Multiple Comparisons . . . 401
    One-Way AOV . . . 402
    Randomized Complete Block Design . . . 405
    Latin Square Design . . . 406
    Factorial Design . . . 408
    Factorial in Blocks . . . 410
    Split-Plot Design . . . 411
    Strip-Plot Design . . . 414
    Split-Split-Plot Design . . . 417
    Repeated Measures, One-Way . . . 420
    Repeated Measures, Two-Way . . . 422
  One Proportion . . . 425
  Two Proportions . . . 428
  One Correlation . . . 429
  Two Correlations . . . 431
  Linear Regression (β) . . . 432
  Linear Regression (R²) . . . 434
  Logistic Regression . . . 436

Chapter 17  Probability Functions . . . 441
  Beta Probability Distribution . . . 442
  Chi-Square Probability Distribution . . . 443
  Inverse of the Chi-Square Distribution . . . 443
  Correlation Coefficient Distribution . . . 443
  F Probability Distribution . . . 443
  Inverse of the F-Distribution . . . 444
  Hypergeometric Probability Distribution . . . 444
  Negative Binomial Probability Dist. . . . 444
  Poisson Probability Distribution . . . 444
  Student’s T-Distribution . . . 445
  Inverse of the Student’s T-Distribution . . . 445
  Standard Normal Distribution . . . 445
  Inverse of the Standard Normal Dist. . . . 445

References . . . 447

Index . . . 456
C H A P T E R

1
Introduction
Statistix is a statistical analysis program. It’s used to analyze experimental data by
applying a variety of statistical tests and procedures, such as t-tests, regression, and
analysis of variance.

The emphasis in the design of Statistix has always been to make it quick and easy
to obtain concise reports that answer your data analysis questions. Whether you are
a professional statistician or a researcher with your own data to analyze, whether
you do data analysis every day or only occasionally, you’ll find that Statistix will
quickly and easily help you find the answers you’re looking for.

Statistix is fast, compact, and accurate. The user’s manual is clear and concise,
complete with examples and references.

Fans of Statistix love it because it’s intuitive and easy to use. Statistix encourages
the kind of creativity that marks the difference between good and routine data
analysis. By its very design, Statistix invites you to be adventurous—to explore,
play around, and get to know your data.

Using This Manual


If you’re like most people, you’ll run the software before reading the manual.
That’s OK with us because Statistix is far easier to use than it is to read about.
Explore the Statistix menus and play around with the Statistix dialog boxes used to
run statistical procedures. You can open one of the sample data files supplied with
Statistix (Sample Data\Cholesterol.sx is a good one to start with) or enter a small
data set of your own. To open a sample data file, click on File on the main menu,
then select Sample Data. To create your own data set, click on Data on the main
menu, then select Insert, then select Variables.

This manual is organized along the lines of the program’s main menu. Chapter 2
covers the procedures on the Data menu: Insert, Transformations, Recode, etc.
Chapter 3 covers the File menu: Open, Save, Import, etc. Each of the remaining
chapters is devoted to one of the submenus of the Statistics menu: Summary
Statistics, Summary Plots, etc.

This manual is a PDF file accessed using Adobe Reader, a free program available
for download at www.adobe.com. You can open this manual using the Windows
start menu: click on Start, All Programs, Statistix, Statistix 10 User’s Manual. You
can also open it from inside Statistix: Click on Help, Statistix User’s Manual.

Reading the manual using Adobe Reader has advantages over reading a paper
copy. The bookmarks panel that appears on the left side displays the table of
contents for easy navigation. Click on a plus sign next to a chapter heading to
expand that chapter. Click on a heading or subheading to navigate to that chapter
or section of the manual.

The manual contains hyperlinks to other portions of the manual. The hyperlinks
are highlighted and underlined. Click on a hyperlink to navigate to that topic. You
can also perform searches for arbitrary text in the manual.

Installing Statistix
The Statistix software comes on one CD. You can’t run Statistix directly from the
distribution CD. You must install Statistix on a fixed disk on your computer. You
must be running Windows XP or later to run the Statistix installation program.

Insert the Statistix CD into your computer. On most computers, the installation
program will start automatically. If it doesn’t, use the Run command to start the
installation program named setup.exe. Follow the on-screen instructions to
complete the installation. A Statistix folder is created to store the program files
(usually C:\Program Files (x86)\Statistix 10) and a Statistix folder is created on the
All Programs folder of your Windows Start menu. To run Statistix, click on the
Statistix icon.

The Statistix Menus


When you first run Statistix, an empty spreadsheet is displayed and the main menu
appears above it. All of the items on the main menu are themselves menus. These
menus offer a variety of data management and statistical procedures. An example
empty spreadsheet and menu appears on the next page.

To select a pull-down menu, use your mouse to point and click on the name of the
pull-down menu. You can also select a pull-down menu by holding down the Alt
key and pressing the character underlined in the menu name (e.g., Alt-F for File).
Point and click to select a menu item from the pull-down menu or press the letter
underlined in the name.

The File menu includes procedures to open and save Statistix data files, and to
import data from other programs. The Data menu offers a number of procedures
to manipulate the spreadsheet data including a powerful
Transformations procedure. The Statistics menu lists several topic menus as
shown below offering both basic and advanced statistical analyses.

Before attempting to use a statistical procedure, you must either create a data set
using the Insert Variables procedure on the Data menu, or open a data file using
either the Open or Import procedure on the File menu.

In addition to the statistics menus listed above, many of the statistical procedures
display results menus after the initial analysis is specified and computed offering
additional analyses. Access a results menu by clicking on the shovel icon (think
dig deeper) located on the toolbar.

Pages and Tabs


Data, reports, and plots are displayed in separate pages inside the main Statistix
window. Only one page is displayed at a time. Each page has a tab located at the
top with text that identifies the contents of the page.

Each time you produce a report or plot using a statistical procedure, a new page
is added. To view a different page, click on the tab for that page. To close a page,
click on the X located on the tab. Closing the data spreadsheet tab closes the open
file leaving you with an empty spreadsheet.

The example on the preceding page has three pages: Cholesterol.sx containing the
spreadsheet of data, Descriptive Statistics containing a report, and Scatter Plot
containing a plot. The spreadsheet page is always controlled with the leftmost tab.
In this manual, we refer to the page that’s visible as the “active page”. The
functions on the toolbar, discussed below, always operate on the active page. For
example, if you click on the Print icon in the toolbar, you’ll print the contents of
the active page, which may be the data, a report, or a plot.

The Statistix Toolbar


The Statistix toolbar appears below the main menu.

Clicking on an icon gives you quick access to commonly used functions.

Click on this icon to open a Statistix data file previously created using the Save,
Save As, or Summary File procedure. The file’s data set becomes the active
Statistix data set and is displayed on the spreadsheet. Clicking on the down-arrow
to the right of the open icon displays a list of recently used files. Selecting one of
the files from the list opens it.

Click on this icon to save the contents of the active page. The active page may be
the data spreadsheet, a report, or a plot. If the data tab is active and your data set is
untitled when you use this function, you’ll be prompted to supply a file name. If
your data set already has a file name, then that file is updated with the current state
of your data.

Reports can be saved as rich text files (extension .rtf), which preserve the
formatting information, or as plain text files (extension .txt).

Plots can be saved using six different graphics file formats: Enhanced metafile,
Windows metafile, BMP, JPEG, GIF, and PNG. The enhanced metafile format
usually provides the best results.

Click on this icon to print the contents of the active page. The active page may be
the data spreadsheet, a report, or a plot. This is the same as selecting Print from
the File menu, and is discussed in detail on page 85.

Click on one of these icons to cut, copy, or paste data to or from the Windows
clipboard. Each function operates on the active page, which may be the data
spreadsheet, a report, or a plot. These functions are also accessed via the Edit
menu, and are discussed in detail on page 25.

Click on this icon to return to the dialog box used to create the report or plot in the
active page. For example, if the active page contains a histogram, click on this
icon to return to the histogram dialog box. The settings you selected to create the
plot are displayed, and you can change some of the settings to alter the plot.

This is different from accessing the Histogram procedure via the Statistics menu.
If you were to do so, you’d create a new histogram plot in a new tabbed page.

Using this icon is the same as selecting Options from the results menu in Statistix 9.

Click on this icon to access a “results menu” of further analysis options for those
statistical procedures that offer them. It operates on the report or plot in the active
page. For example, suppose you used the Linear Regression procedure and the
resulting report is displayed in the active page. Clicking the shovel icon (think
“dig deeper”) displays a menu offering additional reports and plots for the same
regression model: fitted-line plot, prediction, save residuals, etc.

Clicking on this icon displays a menu for program settings. The settings
procedures are used to configure Statistix to your taste. You can select variable
order, date format, fonts, plot colors, and more. Once you’ve made your
selections, they remain in effect until you change them again. Refer to page 17 for
more about settings.

The procedures on the Analysis of Variance menu allow you to analyze more than
one dependent variable at a time. When you select a plot (say, a means plot or
residual plot) from the results menu, a series of plots are generated, one plot for
each dependent variable. Click on the right arrow to display the next plot in the
series. Click on the left arrow to redisplay the previous plot in the series.

Click on this icon to view Statistix help. You can browse help, look up a topic in
the index, or perform keyword searches.

Data In Statistix
In this section, we give an overview of how data are handled once entered into
Statistix. The following section describes ways to get your data into Statistix.
More details about data handling are given in Chapter 2.

Variables and Variable Names
Data in Statistix can be viewed as a rectangular table of values. The columns of
the data table are called variables, and the rows are called cases. All data in
Statistix are referenced by variable names that you assign when data are entered.
A variable name is one to nine characters in length, must begin with a letter,
and can only consist of letters (including international Unicode letters),
digits, and the underscore character. You should assign meaningful variable
names to help you remember what they represent. There are a few words reserved
for other tasks, such as Case, M, and Pi, that you cannot use as variable names.
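For example, the following names illustrate the rules (they are illustrative
only, not taken from any particular data set):

   Valid:    Height, Exam_1, X2, Año
   Invalid:  2Exam (begins with a digit), Net Weight (contains a space),
             Pi (a reserved word), Cholesterol1 (longer than nine characters)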

Variable names are used to manipulate the data. For example, a new variable
Volume can be created from the variables Height and Base using the
Transformations procedure as follows:
Volume = Pi * Height * Sqr(Base)

Pi and Sqr are examples of built-in functions available in Transformations, which


we’ll discuss in detail in Chapter 2.

Variable names are used to specify the source of data for statistical analyses. For
example, to specify the regression of Heat on Chem1 and Chem2 using the Linear
Regression procedure, select the name Heat for the dependent variable. Then
select the names Chem1 and Chem2 for the independent variables.

Data Types
Statistix can handle five types of data: real, integer, date, ANSI string, and
Unicode string. A variable can only contain values of one data type. The data
type of a variable is established when you create the variable and can be
changed to a different data type using the Transformations procedure.

The “real” data type is used to represent floating point numbers (e.g., 1.245). This
format is the most flexible offered by Statistix and is used as the default data type
when creating new variables.

Integer data in Statistix are whole numbers in the range −32767 to 32767. This
data type uses less space than the real data type. You can use the integer data type
instead of the real data type, when appropriate, to increase the data set capacity of
Statistix. This will also save disk space by reducing the size of Statistix data files.

The “date” data type is used to represent dates (e.g., 12/31/1992). See General
Settings on page 18 for an option to select the order of month, day, and year.

String data types are used to enter alphanumeric data, such as a subject’s name.
String variables can be used as grouping variables for statistical procedures that
compute results by group. There are two string types available: ANSI string and
Unicode string. ANSI strings are limited to 200 or so ANSI characters, which
include letters and symbols commonly used in the English language. Unicode
strings support international characters and symbols for most written languages.

Cases and Data Subsets
The rows in the rectangular data table are called cases. The cases are numbered
sequentially. The case numbers are listed on the left side of the spreadsheet
page. The Case function in Transformations and Omit/Select/Restore Cases
provides a method to refer to the case numbers.

Sometimes you’ll want to temporarily work with a subset of all data cases. The
Omit/Select/Restore Cases procedure can be used to “hide” specified cases from
the system. The subset selection is based on a condition that you specify.
Omit If (Height > 5) And (Weight < 100)

Until specified otherwise, Statistix only “sees” cases not omitted using the omit
statement. The cases are not deleted, but hidden. You can easily restore the
hidden cases anytime. Further details on Omit/Select/Restore Cases are given in
Chapter 2.

Data Set Size
A Statistix data set is limited to 1000 variables and 1,000,000 cases. There’s a
maximum file size of 128 MB, which translates to a spreadsheet with about 16
million cells. Variables and cases compete for space in the data table. The more
cases you have in a data set, the fewer variables you can add.
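To see how variables and cases trade off, note that the 128 MB limit at 8 bytes
per value works out to the roughly 16 million cells mentioned above. A data set
with 16 real variables could therefore hold the full 1,000,000 cases, while one
with 160 real variables could hold only about 100,000 cases. (These figures are
approximate and assume real variables throughout.)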

Missing Values
When data are entered into Statistix, a value of “M” is used to show a missing
value. When data are displayed, a missing value is displayed as an “M”. The M
function available in Transformations and Omit/Select/Restore Cases is used to
assign missing values and to make comparisons to missing values.

All Statistix procedures examine the data for missing values and treat them
appropriately. If arithmetic is performed on a variable containing missing values
using Transformations (e.g., A = B + C), the result of the equation will be missing
for the cases that contain the missing values. When a statistic, such as the mean of
a variable, is calculated, only the non-missing values for the column are used to
compute the result. The Linear Regression procedure will drop cases that contain a
missing value for either the dependent variable or any of the independent variables.
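As a sketch of how this plays out in practice (the variable names here are
hypothetical, and the Omit statement is modeled on the example shown earlier):

   BMI = Weight / Sqr(Height)
   Omit If BMI = M

The first statement computes a new variable whose value is M for every case
where Weight or Height is missing; the second hides those cases from subsequent
analyses.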

Getting Data In and Out of Statistix


You can use three methods to enter data into Statistix:
1) Keyboard data entry
2) Text or Excel files
3) Statistix data files.

Keyboard data entry is performed directly on the spreadsheet page. You create
new variables using the Insert Variables procedure found on the Data menu
(Chapter 2). Keyboard data entry is often preferred when the amount of data being
entered is small.

Statistix can read text files created using a word processor, a spreadsheet program,
or some other PC program. Statistix can also read Excel spreadsheet files. Text
files provide a standard data exchange format between Statistix and other
programs. Use the Statistix Import Excel or Import Text procedures to create or
augment data sets using data stored in spreadsheet or text files. Likewise, Statistix
data can be exported to Excel or text files using the Export Excel or Export Text
procedures.

While running Statistix, your data set is temporarily stored in memory. Before you
exit Statistix, you should save your data set using either the Save or Save As
procedure. A Statistix data file is a “snapshot” of Statistix’ data memory, and it’s
an ideal method of storing data sets for future Statistix analyses. The advantages
of Statistix data files are that they can be read and written very rapidly; are compact
in terms of the disk space occupied; and preserve such Statistix information as
variable names, labels, and the case omit status. Statistix data files are described in
more detail in Chapter 3. Statistix data sets are retrieved using the Open
procedure.

A Statistix data set is dynamic. You can increase or decrease its size in a variety of
ways. For example, you can delete cases and variables when you no longer need
them to conserve space. New cases and variables can be added from the keyboard,
imported from text and Excel files, or merged from Statistix data files. You’ll
often use Transformations to create new variables. Some of the statistical
procedures can produce new variables, too. For example, Linear Regression can
save residuals and predicted values as new variables. A Statistix data file can be
saved at any time.

Statistix 10 can also open data files created by earlier versions of Statistix.
However, earlier versions of Statistix can’t open Statistix 10 data files. The Save
As procedure includes an option to save data using older file formats for backward
compatibility (see Chapter 3).

Statistix Dialog Boxes


Once you select a procedure using the menus, a dialog box is displayed on your
screen. A dialog box is a window you use to instruct Statistix on the details of a
data management operation or statistical procedure.

Statistix dialog boxes look like the dialog boxes you’ve seen in other Windows
applications. They contain the familiar buttons, check boxes, list boxes, and edit
controls. To illustrate, we’ll examine the dialog box for the Linear Regression
procedure displayed below.

Like most Statistix dialog boxes, this one lists the data set variables in a list box
with the heading Variables. You specify the regression model by moving variable
names from the Variables list box to the Dependent Variable, Independent
Variables, and Weight Variable list boxes.

The example dialog box also has a check box with the title Fit Constant. You
check and uncheck the box by clicking on it with your mouse. The model will fit
the constant when the box is checked.

Some Statistix dialog boxes include radio buttons and edit controls. Radio buttons
are a group of buttons where you must select one and only one of the choices
available. An edit control is a box that you use to enter text.

Once you’ve specified the model, press the OK button to compute and display the
results. You can exit a dialog box without performing the analysis by pressing the
Cancel button. Press the Help button to display context sensitive help.

Variable Name Selection
Most Statistix procedures require that you specify which variables are to be
used to complete the analysis. You do this by moving variable names from the
Variables list to one or more “target” list boxes. In the Linear Regression
dialog box on the preceding page, there are three target boxes: Dependent
Variable, Independent Variables, and Weight Variable.

First, highlight a variable name in the Variables list box: Point to the variable
name using the mouse and click the mouse button once. The background color
changes to show that the variable is highlighted. You then press the right-arrow
button next to the target box to which you want to move the variable. The name is
deleted from the Variables list and is added to the target box.

You can move more than one variable at a time. To highlight several variables,
press and hold down the Ctrl key, and click each variable you want to select. Once
you’ve highlighted all the variables you want, press the right-arrow button to move
them.

You can highlight a range of contiguous variables at once. While holding down
the Shift key, drag your mouse over the variables you want to highlight. The order
that the variables are listed in the Variables list box affects the usefulness of this
feature. You can have variables listed in alphabetical order or in spreadsheet order
(see General Settings on page 18). Press Ctrl-A to highlight all the variables.

In some situations you can select and move a variable from one list box to another
by double-clicking on the name. This is a quick way to move a variable, but it can
only be used when the destination is obvious.

You can also drag one or more variables from one box to another using your
mouse. First highlight one or more variables, then while holding down the mouse
button, move the mouse to a different box, and then release the mouse button. You
can also rearrange the order of variables in a box by dragging them to a new
position in the same box.

Saving Dialog Boxes
The variable lists, file names, and other details that you enter on a dialog box
are automatically saved when you press the OK button. When you select a
procedure for the second time, the data you entered previously automatically
reappear. You can press OK and rerun the same analysis, or you can make changes
to the dialog box. When you save a data set using the Save procedure, the dialog
box details are saved with the spreadsheet data. When you open the data file
later, the dialog box options specified earlier are available.

Results Page
When you specify an analysis using a dialog box and then click the OK button, the
analysis is computed. The resulting report or plot is displayed in a new page. In
this manual, we refer to a page that contains a report or plot as a “results page”
to distinguish it from the spreadsheet data page.

When a report can’t fit in the page, scroll bars appear that you can use to scroll
through the report. You can also scroll the window’s contents using the arrow and
page keys.

If you want to change the settings used to generate a report or plot, you can click
on the “options” toolbar icon that looks like a dialog box. The dialog box
reappears with the same settings. You can change some of the settings and click
OK to recompute the report or plot.

The results pages for some procedures have a “results menu”. You’ll notice that the
results menu icon that looks like a shovel (think “dig deeper”) is enabled for those
procedures. The results menu lists options for further analysis.

You can modify the appearance of a report or plot by clicking on the settings icon.
You can change features such as fonts, colors, and plot symbols. In particular,
you’ll be able to change plot titles for a plot in the active page, as discussed in the
next section.

When you’ve finished with a results page, you’ll want to close the page (although
you can leave it open for later reference). You close the results page by pressing
the close button marked by an X in the tab.

Settings
The Settings procedures are used to configure Statistix to your taste. You can
select variable order, date format, font sizes, plot colors, and more. Once you’ve
made your selections, they remain in effect until you change them again. Click on
the Settings icon to display the settings menu.

General Settings
Use the General Settings procedure to select variable order, date format, and
other general program settings.

Statistix displays the variable names of the open data file in a list box for most data
management and statistical procedure dialog boxes. The variable name list can be
kept in alphabetical order or in spreadsheet order.

Variable names are stored using both upper and lower case letters. Check the
Report Variable Names in All Caps box if you’d prefer to have variable names
displayed using all capital letters in reports and plots.

Select the Date Format you want to use for both data entry and reports—
month/day/year, day/month/year, or year/month/day. You can also have dates
displayed using either two or four digits for the year. Dates entered with two-digit
years less than the Century Cutoff for 2-digit Years value will be entered as 21st
century dates.
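For example, if the cutoff is set to 30 and the date format is month/day/year,
the entry 6/15/29 is read as June 15, 2029, while 6/15/31 is read as June 15,
1931. (The cutoff value of 30 here is only an illustration.)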

Check Display Data Set Label in Plots to have a data file’s data set label displayed
at the top of all plots.

Check Create backup Statistix files if you’d like Statistix to create a backup file
each time you save a Statistix file using the name of a file that already exists.
Backup files have the extension .~sx.

Spreadsheet Font
Select Spreadsheet Font from the Settings menu to change the font and/or font
size used to display data in the spreadsheet.

Click Reset to return to the default values. Click OK to save your changes.

Report Font
Select Report Font from the Settings menu to change the font and/or font size
used to display reports. Only fixed-width fonts are listed.

Click Reset to return to the default values. Click OK to save your changes.

Plot Colors
Select Plot Colors from the Settings menu to select the colors used to display
the various components of plots.

Press a color button to change the color of the corresponding component. A color
dialog box will appear from which you can select a basic color, or create a custom
color. Press the down-arrow buttons for Fill Colors or Line Colors to display a
menu of color palettes, such as greyscale, modern, or pastels. Doing so selects a
set of six predefined colors all at once.

Click Reset to return to the default values. Click OK to save your changes.

Plot Fonts
Select Plot Fonts from the Settings menu to select fonts, font sizes, styles,
and colors used to display the plot titles.

Click Reset to return to the default values. Click OK to save your changes.

Plot Symbols / Line Patterns
Select Plot Symbols / Line Patterns from the Settings menu to select the symbols
used to mark the points of scatter plots and the line patterns used on line
plots like the time series plot. Check the Display Grid Lines checkbox to have
gridlines displayed on plots. Click Reset to return to the default values.
Click OK to save your changes.

Plot Titles
Procedures that produce a plot generate default titles based on the variables
and values selected in the corresponding dialog box. You can change the titles
of a plot in the active page by selecting Plot Titles from the Settings menu.
You can also right-click on a plot to display a popup menu that includes plot
titles.

You can edit, add, and delete titles for the plot. Pressing the OK button redisplays
the plot with the new information. Pressing the Reset button resets all the titles
back to those generated automatically by Statistix.

You can also change the size and color of fonts used to display individual plot
titles by selecting Plot Fonts from the Settings menu.

C H A P T E R

2
Data Menu
The Data Menu procedures are used to enter data and manipulate data already
entered into Statistix.

The Insert procedures are used to add variables and cases to the active
spreadsheet. Use them to create new data sets.

The Delete procedures are used to remove variables and cases from the active
spreadsheet. Cases can be deleted by specifying a range of case numbers, or all
omitted cases can be deleted.

The Fill procedure is used to fill consecutive cells with a single value.

The Transformations procedure is used to modify the values of existing variables
and to create new variables using algebraic expressions and built-in
transformation functions.

The Recode procedure is used to change a list of values of an existing variable
to a single new value.

The Indicator Variables procedure is used to create variables that use the values 0
and 1 to indicate the absence and presence of a factor.

The Stack operation stacks several variables end-to-end to create a single long
variable. The Unstack operation unstacks one variable into several shorter ones.

The Transpose operation transposes a table of data reversing the roles of variables
and cases.

The Gram-Schmidt operation orthonormalizes a set of vectors.

The Omit/Select/Restore Cases procedure lets you “hide” or “omit” specified
cases from the program. These rows are ignored by Statistix, although they can
be restored at will. If a case is not omitted, we say that the case is selected.
The omit status of a case refers to whether it’s omitted or selected.

The Sort Cases procedure is used to sort the cases of a data set based on the values
of one or more variables.

The Rename Variables procedure is used to rename one or more variables.

The Reorder Variables procedure is used to reorder the variables as they appear
on the spreadsheet left to right.

The Column Formats procedure is used to control the column widths and
numerical formats used to display each variable.

The Labels procedures are used to enter a data set label, variable labels, and value
labels. These labels appear on reports and plots.

Spreadsheet Page
The current Statistix data set is displayed in the spreadsheet page. The file
name appears in the tab used to select the spreadsheet page. The word “Untitled”
appears in the title bar for a new data set that hasn’t been saved yet. The
variable names appear along the top of the spreadsheet page, and the case
numbers appear along the left margin. Both selected and omitted cases appear on
the spreadsheet. The case numbers and values of omitted cases are dimmed to
remind you that the cases are omitted. The values of omitted cases can be
changed.

When the spreadsheet page is the active Statistix page, you can enter data in the
cells, scroll through the data using the scroll bars, and manipulate the data using
the Data procedures described in this chapter. You can make the spreadsheet page
the active page by clicking on the leftmost tab that appears below the toolbar.

The cell at the current position in the spreadsheet is highlighted. The value at the
current position can be changed simply by entering a new value and pressing Enter.
Enter the letter M for missing values for integer, real, and date variables. To
partially change the value of a cell without retyping the entire entry, first highlight
the cell, then press F2 or click on the string in the highlighted cell. Press Esc to
undo the changes to the current cell.

You can move the current position around the spreadsheet using the arrow keys,
page up, and page down keys. You can move the current position to a different
cell by clicking on that cell with your mouse.

Variable Data Types
Statistix can handle five types of data: real, integer, date, ANSI string, and
Unicode string. A variable can only contain values of one data type.

The “real” data type is used to represent floating point numbers in Statistix. This
format is the most flexible offered by Statistix and is used as the default data type
when creating new variables.

Integer data in Statistix are whole numbers in the range −32767 to 32767. This
data type uses less storage space than the real data type. You use the integer data
type instead of the real data type, when appropriate, to increase the data set
capacity of Statistix. This also saves disk space by reducing the size of Statistix
data files.

The “date” data type is used to represent dates.

The ANSI and Unicode string data types are used to enter alphanumeric data, such
as a subject’s name. String variables can be used as grouping variables for
statistical procedures that compute results by group.

An ANSI string can only handle ANSI characters, a limited set of Latin-based
characters. It’s adequate for English and some other European languages.

Unicode strings can include any Unicode character; these cover the characters of
most of the world’s writing systems. Depending on how Windows is set up on your
computer, you may be able to enter characters that don’t appear on your keyboard
using an Alt sequence. You can also use the Windows Character Map utility and
paste characters into Statistix.

When typing numbers using Statistix you can enter a number in either decimal
format (e.g., 2.45) or exponential format (e.g., 1.23E+05). Enter the letter M to
indicate a missing value for integer, real, and date variables, but not for string
variables. A blank string is the missing value equivalent for string variables.

A variable’s data type is established when you create it. Variables can be created
using several of the Data procedures discussed in this chapter including Insert
Variables and Transformations. Variables are also created using the Import
procedures discussed in Chapter 3. You indicate the data type of a new variable by
typing the letter I, R, D, S, or U in parentheses after the variable name for integer,
real, date, ANSI string, or Unicode string. String types require a number after the
letter S or U to indicate the maximum length. For example, FirstName(S12)
creates an ANSI string variable named FirstName with maximum length 12.

Edit Menu
In addition to the data management procedures listed on the Data menu, you can
manipulate the spreadsheet data using the Cut, Copy, and Paste commands on the
Edit menu. The Cut and Copy commands copy selected spreadsheet cells to the
Windows clipboard, and the Paste command retrieves information from the
clipboard. You can use these commands to move a block of cells from one place
on the spreadsheet to another, or to export the block to another Windows
application. You can also import data from other programs via the clipboard using
the Paste command.

Before you can use the Cut and Copy commands, you have to select part of the
spreadsheet to cut or copy. Use your mouse to select cases, variables, or a
rectangular block of cells. To select a single case, click on the cell containing the
case number. The whole case is highlighted to show that it’s selected. You can
select a range of cases by clicking on the case number for the first case in the
range, and dragging the mouse to the last case in the range. You can select
noncontiguous cases, hold the Ctrl key down and click on the case number for the
cases you want to select.

To select a variable, click on the variable name at the top of the spreadsheet. To
select a range of variables, click on the first variable name, then drag the mouse to
the last variable name in the range. To select noncontiguous variables, hold
the Ctrl key down and click on the variable names you want to select.

To select a single cell, click on that cell. To select a range of cells, click on one
corner of the rectangle, then drag the mouse to the opposite corner of the rectangle.
The cells are highlighted as you drag the mouse so it’s easy to see what you’ve
selected. You can also select a range of cells by clicking on one corner of the
rectangle, and then clicking on the opposite corner while holding down the Shift
key.

Once you’ve selected cases, variables, or cells, use the Cut or Copy command to
copy the selected data to the clipboard. The Cut command copies the selected
cells to the clipboard, and then deletes them from the spreadsheet. The Copy
command copies the selected cells to the clipboard without deleting them. The
information remains on the clipboard until you next use the Cut or Copy command
in Statistix or another Windows program. Pressing Del deletes the selected cases,
variables, or cells without first copying the information to the clipboard.

Information on the clipboard, whether it was copied there from Statistix or another
program, can be pasted anywhere in the Statistix spreadsheet. To paste
information, first click on a cell to select the insertion point. Then, select the Paste
command from the Edit menu. The pasted data overwrites the data of existing
variables starting at the current position. When you paste data on the spreadsheet,
keep in mind that the data must be consistent with the data types of the target
variables.

When you're pasting data into an empty Statistix spreadsheet, Statistix will create
new variables to hold the data. The first row of the clipboard data can be used for
variable names, but the names must conform to the rules for Statistix variable
names: names must start with a letter and consist only of letters, digits, and the
underscore character. Variable names can include data type information (e.g.,
lastname(s15)). If the first row doesn't have variable names, Statistix will generate
names starting with V001.

Spreadsheet Right-click anywhere on the spreadsheet to bring up the spreadsheet pop-up menu.
Pop-up Menu The menu includes the items on the Edit menu discussed above, plus easy access to
some useful procedures found elsewhere.

The Transformations procedure is used to modify the values of existing variables
and to create new variables using algebraic expressions and built-in transformation
functions. It also appears on the Data menu, and is discussed in detail on page 31.

The Omit Highlighted Cases command omits all of the cases currently
highlighted. The Restore Highlighted Cases command restores the status to selected for all
of the cases currently highlighted. See Cases and Data Subsets in Chapter 1 for a
description of selected and omitted cases. See the Omit/Select/Restore Cases
procedure on page 41 for an alternative method of omitting cases.

Selecting Spreadsheet Font brings up a dialog box that allows you to change the
font and font size used to display values in the spreadsheet.

Insert Variables
Use Insert Variables to add variables to a new or existing data set. The dialog box
appears on the next page. The existing variables, if any, are listed in the Variables
list box for your reference. You list a variable name for each column of data you
intend to enter in the New Variable Names box. Variable names must start with a
letter (including non-English Unicode letters) and can contain letters, digits, and
the underscore character. Variable names can be up to nine characters long.

In Statistix data can be integer, real, date, ANSI string, or Unicode string.
However, a particular column can only be used to store one type of data. You can
specify the data type of variables when you list the variable names. Use the letters
I, R, D, S, and U in parentheses after the variable names to identify integer, real,
date, ANSI string, and Unicode string types. String types require a number after
the letter S or U to indicate the maximum length. If you don’t specify a data type,
real is assumed.

In the dialog box above, the variable Student is a Unicode string variable with a
maximum length of 15 characters. The variable Grade is an ANSI string variable
with length 1. No data type is specified for the variables Exam1, Exam2, and
Exam3, so they’re assigned the real data type.

The variable list Exam1, Exam2, and Exam3 was abbreviated using what we call
Var01 .. Var99 syntax. A data type entered at the end of the list is applied to all of
the variables in the list (e.g., Q1 .. Q15(I)).
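
As an illustration of this syntax, typing the single entry Exam1 .. Exam3 in the
New Variable Names box is equivalent to listing the three variables Exam1,
Exam2, and Exam3 individually.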

When your spreadsheet already contains data, you can select the Insertion Point:
Before Current Column, or After Current Column. The current column is the
one with the variable name highlighted. Press OK to begin data entry. For a new
data set, an empty spreadsheet will appear, as shown below.

To enter data, simply enter values for each cell and press Enter. Pressing Enter or
Tab advances the cursor toward the right. Press Shift-Tab to move to the left.
Press the up arrow to move up and the down arrow to move down. Enter the letter
M for missing values in integer, real, and date variables, but not for string
variables.

You can use the arrow keys to go back and correct errors anytime. Simply type in
a new value over the existing value in a cell. Press F2 to edit the string at the
current cell. You can also move to a different cell by clicking the mouse on that
cell.

Insert Cases

First enter the case number where you want to insert the new cases. Then enter the
number of new cases to insert. Press OK. New cases appear in the spreadsheet
page as rows of M’s representing missing values. Enter your data by typing in the
actual values over the M’s.

You can also add one new case at the bottom of the spreadsheet by pressing the
down arrow key when the active cell is at the last case.

Delete Variables
Move the variables you want to delete from the Variables list box on the left to the
Delete Variables list box on the right. First highlight one or more variables in the
Variables box, then press the right-arrow button to move them. You can move a
single variable by double-clicking on the name with your mouse. You can also use
your mouse to drag variables from one box to another. Press the OK button to
delete the variables in the Delete Variables box.

An alternative method of deleting variables is to highlight one or more variables
using your mouse, and then press the Delete key. To select a variable, click on
the variable name at the top of the spreadsheet. To select a range of variables,
click on the first variable name, then drag the mouse to the last variable name in
the range. To select noncontiguous variables, hold the Ctrl key down and click on
the variable names you want.

Delete Cases
Use this procedure to delete one case or to delete a contiguous block of cases.

Enter the first and last case numbers of the range of cases you want to delete, then
press the OK button.

An alternative way to delete cases is to highlight one or more cases using your
mouse, and then press the Delete key. To select a single case, click on the cell
containing the case number. The whole case is highlighted to show that it’s
selected. You can select a range of cases by clicking on the case number for the
first case in the range, and dragging the mouse to the last case in the range. To
select noncontiguous cases, hold the Ctrl key down and click on the case
numbers for the cases you want to select.

Delete Omitted Cases


You can also delete all omitted cases. This method of deleting cases gives you
greater flexibility in selecting which cases to delete. The Omit/Select/Restore
Cases procedure discussed on page 41 can be used to omit cases based on data
values rather than case numbers.

Delete Highlighted Cells


This procedure lets you delete an arbitrary rectangle of cells, and optionally shift
the remaining cells up or to the right to fill the void. First highlight the cells you
want to delete. To highlight a single cell, click on that cell. To highlight a range
of cells, click on one corner of the rectangle, then drag the mouse to the opposite
corner of the rectangle. The cells are highlighted as you drag the mouse so it’s
easy to see what you’ve selected.

Once you’ve highlighted a range of cells, select Delete Highlighted Cells from the
Data menu, and the dialog box shown below appears.

Select one of the Shift Cells radio buttons. Select Shift Cells Left to have data to
the right of the deleted rectangle shifted left. Select Shift Cells Up to have data
below the deleted rectangle shifted up. Select Don’t Shift Cells to fill the deleted
cells with missing values.

Fill
The Fill command is used to fill a number of contiguous cells with a particular
value.

Cells will be filled starting at the current position. First select the direction in
which you want the filling to proceed from the Fill Direction radio buttons, either
down or right. Enter the number of cells you want to fill in the Number of Cells
box. Enter the value you want to use to fill the cells in the Fill Value box. If you
select Down for the fill direction, new cases will be added to the bottom of the
spreadsheet if necessary. If you select Right, then filling will automatically wrap
around when the end of a case is reached and will continue on the next row.
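
As a brief illustration (the variable and values here are hypothetical): with the
current position at case 1 of a variable named Group, selecting Down, entering
10 for Number of Cells, and entering 0 for Fill Value assigns the value 0 to cases
1 through 10 of Group.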

Transformations
This powerful procedure is used to create new variables or to alter the values of the
existing ones. It has two general forms: simple assignment and conditional
assignment using the If-Then-Else construct.

If you’re not familiar with arithmetic and logical expressions, please read the
Arithmetic and Logical Expressions section on page 49.

Specification The Transformations Expression box in the center of the dialog is where you
build your transformation. The data set variables are listed in the Variables list
box on the left. The Statistix built-in functions are listed in the Function list box
on the right. Below the Transformation Expression box is an area where error
messages are displayed. The error message “Unknown variable or function Tunc”
in the example below is reporting that the built-in function Trunc has been
misspelled.

The transformation expression must be typed in manually. You can, however,
move variables from the Variables box and built-in functions from the Functions
box to the Transformation Expression box using various methods. You can drag a
variable or function name using your mouse and drop it anywhere in the Expression
box. If you double-click on a variable or function name, it will be inserted into the
Transformation Expression box at the cursor’s current position. You can also
highlight one or more variables in the Variables box and then press the right-arrow
button to insert them into the Transformation Expression box at the cursor’s
current position. You can move the cursor in the box using the arrow keys or the
mouse.

You’re only allowed to enter a single transformation in the Transformation
Expression box. The control has several lines so that you can enter a long
expression.

Once you’ve entered an expression, press the Go button. If Statistix finds an error
in your expression, you’ll see an error message displayed below the Transfor-
mation Expression box. Edit your expression to correct the error and press Go
again. Press the Clear button to erase the contents of the Transformation box.

Clicking on the down-arrow to the right of the Transformations Expression box
displays a drop-down list of previously entered expressions. Selecting an
expression from the list copies it to the main expression box. The history of
transformation expressions you’ve used is saved as part of your data file so they
can be reused later.

Simple Suppose the variable NewVar is a new one you want to create or an existing one
Assignment that you want to alter. Such a variable is called a target variable. In a simple
assignment, a target variable is simply equated with an arithmetic expression:
{target variable} = {arithmetical expression}

To give a specific example, suppose you want NewVar to be the sum of the
variables A, B, and C.
NewVar = A + B + C

A new variable called NewVar has now been created with the sum of the variables
A, B, and C. If a variable called NewVar already exists, the variable’s values will
be replaced by the sum A + B + C.

A target variable’s name can appear on both sides of an assignment statement
provided the variable already exists.
Target = 2.75 + Sqrt(Target)

Sqrt is an example of a built-in function that computes the square root of the
argument. The built-in functions are described on page 53.

Statistix can handle four types of data: real, integer, date, and string. Use the
letters I, R, D, S, and U in parentheses after the name of the new variable to
identify integer, real, date, ANSI string, and Unicode string types. String types
require a number after the letter S or U to indicate the maximum length. If you
don’t specify a data type, real is assumed. The examples below illustrate how
integer, date, and string variables can be created.
Count(I) = A + B + C
DueDate(D) = SaleDate + 30
FullName(S30) = FirstName + “ ” + LastName

Date and string constants must be enclosed in quotation marks.
Salute(S20) = “Ms. ” + LastName
Length = LastDay - “1/1/95”

Conditional On occasion you may want to apply one assignment to the target variable when
Assignment some condition is met and another assignment when the condition is not met.
Conditional assignments employing the If-Then-Else construct are designed for
this purpose. Its general form is:
If {logical expression}
Then {target variable} = {some arithmetic expression}
Else {target variable} = {some other arithmetic expression}

If a specified logical expression for a particular case is true, the Then clause is
performed; otherwise, the Else clause is performed. This construct is quite
flexible, as the following examples illustrate.

Suppose you want to create a new variable—AgeGroup—based on the values of a
variable Age. You want AgeGroup to be the tens digit of Age, but you want to
group all ages of 60 or greater into one group.
If Age < 60
Then AgeGroup = Trunc(Age/10)
Else AgeGroup = 6

The three key words—If, Then, and Else—needn’t be on separate lines. A short
statement can be written on one line. The Else expression can be omitted from a
conditional transformation, in which case the target variable is left unchanged
when the logical expression is not true.
If AgeGroup > 6 Then AgeGroup = 6

The logical expression can include any valid arithmetic expressions, and the
arithmetic expressions can include the target variable if it already exists.
If (A + B) <= (1.25 * Sin(C))
Then A = 0.0
Else A = A + D + E

Converting All of the values of a particular variable must be of the same data type, either
Variable integer, real, date, or string. The data type of an existing variable can be changed
Types using a transformation. For example, suppose that the variable Conc was
originally created as a real variable and contains values for cholesterol
concentration. You can convert the variable Conc to an integer variable using the
transformation:
Conc(I) = Conc

If a value for Conc exceeded the limits for an integer variable (-32767 to 32767),
the result would be missing. An integer variable can just as easily be converted to
a real variable:
Height(R) = Height

Sometimes you end up with string variables that contain numbers or dates. Since
Statistix can’t do arithmetic using string variables, it’s best to convert string
variables of these types using the special string conversion functions Date and
Number.
BirthDate(D) = Date(BirthDate)
Level(R) = Number(Level)

The maximum length of a string variable can be changed using a transformation.
Suppose you had created a variable Name(S15); you could increase the maximum
length using the transformation:
Name(S20) = Name

You can convert ANSI string variables to Unicode string variables and vice versa.
For example, a Unicode string variable Country can be converted to an ANSI
string variable using the transformation:
Country(S20) = Country

If the original string variable contains Unicode characters that aren’t part of the
ANSI set of characters, each such character is replaced by a question mark.

Missing If a number used in an arithmetic expression is missing, the result of the
Values expression is also missing. After all, you can’t perform arithmetic on numbers that
don’t exist. Consider the transformation:
A = B + C

If the value for the variable C is missing for a particular case, the expression B + C
is missing and the target variable A is assigned the missing value.

In the logical expression of a conditional transformation, it makes sense to make
tests of equality using missing values. The expressions If X = Y and X <> Y are
evaluated normally if either X or Y is missing. However, a number can’t be less
than or greater than a missing value. So when X or Y is missing in an expression
like If X < Y, the logical expression can’t be evaluated and the target variable is
assigned a missing value.

The treatment of missing values is discussed at greater length in the Arithmetic
and Logical Expressions section on page 49.

Omitted Cases omitted using Omit/Select/Restore Cases are ignored when a transformation
Cases is performed. If the target variable is a variable that already exists, omitted cases
retain their old values, i.e., they aren’t transformed. If the target variable doesn’t
already exist, omitted cases are assigned the missing value when the
transformation is performed.

Recode

The Recode procedure is used to replace (or “recode”) a list of values of a variable
with a new value. It’s most useful when you want to reduce the number of values a
variable has by replacing them with new values that identify groups of the original
values.

First select the variable you want to recode from the Variables list and copy it to
the Source Variable box. Next, identify the Destination Variable. You can select
an existing variable, or enter the name of a new variable. The source and
destination variables can have any data type (integer, real, date or string) and they
needn’t have the same type.

Finally, enter lists of values of the source variable in the column of boxes titled
Old Values, and a corresponding new value for the destination variable in the
column of boxes titled New Values. When entering a list into an Old Values box,
separate values with commas or spaces. You can specify a range of values using a
dash (-) or colon (:). Use the plus sign to indicate an open-ended range of values.
When listing the values of a string variable, long strings, or strings that contain
spaces or commas must be enclosed in quotation marks (e.g., “George
Washington”, “Abe Lincoln”).
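
As an illustration (the variable and groupings here are hypothetical), to collapse a
variable Age into three groups, you could enter the lists 0-19, 20-39, and 40+ in
three Old Values boxes, with the new values 1, 2, and 3 in the corresponding New
Values boxes. Cases with Age between 0 and 19 would receive the value 1 in the
destination variable, and so on, with 40+ capturing all ages of 40 or greater.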

Indicator Variables
An indicator variable (also called a dummy variable) uses the values 0 and 1 to
indicate the absence or presence of a factor or condition. This procedure creates
indicator variables based on the values of an existing variable.

For example, suppose you have a categorical variable named Dose that contains
the values 5, 10, and 15, and you want to create three indicator variables named
Low, Med, and High. In the dialog box below, the variable Dose was moved from
the Variables list box to the Source Variable box. The three values 5, 10, and 15
were then entered into the Value List box. The names of the three new variables
Low, Med, and High were then entered into the Indicator Variables box.

For cases where Dose has the value 5, the new variable Low will be given the
value 1; otherwise, the value would be 0. For cases where Dose has the value 10,
the new variable Med will be given the value 1; otherwise, the value would be 0.
For cases where Dose has the value 15, the new variable High will be given the
value 1; otherwise, the value would be 0.
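
Using the dialog settings described above, the resulting values would be:

Dose  Low  Med  High
   5    1    0     0
  10    0    1     0
  15    0    0     1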

The source variable containing the existing levels can be an integer, real, date, or
string variable. The levels listed for an integer or a real variable must be
nonnegative whole numbers. String values in the list of levels must be enclosed in
quotation marks.

The list of levels can be omitted if the levels are consecutive numbers starting with
1. For example, if Dose had contained the levels 1, 2, and 3, the Value List could
have been left empty.

Stack
The Stack procedure is used to stack several variables end-to-end to create a single
long variable.

Data to be analyzed can often be broken down into groups of interest. For
example, you may be interested in cholesterol concentration by age group, or fat
absorption of doughnuts by the type of fat used in cooking. There are two ways
data of this type can be presented. One method is to store the data for each group
in its own variable. A second method is to put all the data into one variable, and
use a second categorical variable to identify the groups. You can use the Stack
procedure to transform data stored in the first format to data stored in the second.

First, move the names of the variables you want to stack together from the
Variables list box to the Source Variables list box. Then, enter the name of a new
or existing variable to capture the stacked data in the Destination Variable box.

You can also specify a Class Variable. The class variable is assigned a number for
each source variable, starting with number 1 for the first source variable. You can
use it to tell from which source variable each item of data in the destination
variable originated.

In the example dialog box, the variables Quick and Standard are selected as the
source variables. A new variable Conc has been entered as the destination
variable. The new variable Method has been entered as the class variable and will
capture the group numbers 1 and 2 for the source variables Quick and Standard,
respectively. The resulting data set, which contains both the original variables and
the new variables, is presented below.

Quick  Standard  Conc  Method
   23        25    23       1
   18        24    18       1
   22        25    22       1
   28        26    28       1
   17         M    17       1
   25         M    25       1
   19         M    19       1
   16         M    16       1
    M         M    25       2
    M         M    24       2
    M         M    25       2
    M         M    26       2

Unstack
The Unstack procedure is used to break one long variable into two or more shorter
variables. The data in the original variable is divided up between the new
variables based on the values of a class variable.

The Unstack procedure is the reverse of the Stack procedure discussed above.
We’ll illustrate the Unstack procedure using the same data.

Suppose you start with the variables Conc and Method, and you want to make the
two variables Quick and Standard. You’d use the Unstack procedure to do this, as
illustrated in the dialog below.

Both the Source Variable and the Class Variable are required. The class variable
can be any data type (integer, real, date, or string). There can be no more than 500
groups. List the Destination Variables. You can use Var1 .. Var99 syntax to
abbreviate the list. If there are n values for the class variable, you must enter
exactly n variables in the list.
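
For the example data introduced above, you’d enter Conc as the Source Variable,
Method as the Class Variable, and Quick and Standard as the two Destination
Variables. Cases with Method equal to 1 would be copied to Quick, and cases
with Method equal to 2 would be copied to Standard.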

Transpose
Most Statistix procedures operate on columns of data. You may on occasion want
to obtain statistics for rows of data. The transpose operation is used to copy data
from rows of selected variables to a new set of variables, thus reversing the role of
cases and variables.

This is best explained using an example. Suppose we have three variables A, B,
and C with four cases of data:

Case A B C
1 1 2 3
2 4 5 6
3 7 8 9
4 10 11 12

To transpose this table of data, we specify the variables A, B, and C as the Source
Variables. Since there are four cases, we need four Destination Variables. We
list the new variables D, E, F, and G, as illustrated in the dialog box on the next
page.

The results are shown below.

Case A B C D E F G
1 1 2 3 1 4 7 10
2 4 5 6 2 5 8 11
3 7 8 9 3 6 9 12
4 10 11 12 M M M M

Note how the values 1, 2, and 3, reading left to right for variables A, B, and C now
appear reading from top to bottom for variable D.

Gram-Schmidt
In any inner-product space, we can choose the basis in which to work. It often
simplifies calculations to work in an orthogonal basis. Given an arbitrary basis
{v1, v2, ..., vn} for an n-dimensional inner-product space V, the Gram-Schmidt
algorithm constructs an orthonormal basis for V.
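
For reference, the classical Gram-Schmidt recursion (standard mathematical
notation, not Statistix syntax) is:
u1 = v1
uk = vk - [(vk·u1)/(u1·u1)]u1 - ... - [(vk·uk-1)/(uk-1·uk-1)]uk-1
ek = uk / ||uk||
The vectors e1, e2, ..., en form the orthonormal basis.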

If you provide this algorithm with a list of variables representing vectors, it creates
new variables containing the values of an orthonormal basis (orthogonal and
normalized).

To use this procedure, move the variables representing your vectors to the Source
Variables box. Then, type the names of variables that will capture the
orthonormalized vectors in the Destination Variables box. You can use existing
variables, and you can use Var1 .. Var99 to abbreviate the list. If you select n
source variables, you must specify n destination variables.

Missing values are not allowed. Omitted cases are ignored. All variables should be
of the real data type. See Golub and Van Loan (1996) for computational details.

Omit/Select/Restore Cases
If you want to analyze a select subset of the cases in Statistix, the Omit/
Select/Restore Cases procedure lets you temporarily “hide” some of your data
from the program. Once cases are omitted, they’re ignored by the statistical
procedures until they’re “restored”. Omitted cases can also be selectively restored
using a “select” statement. The omit status of cases is saved when you use the
Save procedure.

You specify the cases you want to omit using logical expressions. If you’re not
familiar with logical expressions, please refer to the Arithmetic and Logical
Expressions section on page 49.

The Omit/Select/Restore Expression box in the center of the dialog is where you
build your omit statement. The data set variables are listed in the Variables list
box on the left. The Statistix built-in functions are listed in the Function list box
on the right.

The omit expression must be typed in manually. You can, however, move
variables from the Variables box and built-in functions from the Functions box to
the Omit/Select/Restore Expression box using various methods. You can drag a
variable or function name using your mouse and drop it anywhere in the Expression
box. If you double-click on a variable or function name, it will be inserted into the
Expression box at the cursor’s current position. You can also highlight one or
more variables in the Variables box and then press the right-arrow button to insert
them into the Expression box at the cursor’s current position. You
can move the cursor in the box using the arrow keys or the mouse.

You’re only allowed to enter a single omit statement in the Omit Expression box.
The control has several lines so you can enter a long expression.

Once you’ve entered an expression, press the Go button. If Statistix finds an error
in your expression, an error message is displayed below the Omit Expression box.
Edit your expression to correct the error and press Go again. Press the Clear
button to erase the contents of the Omit Expression box.

Clicking on the down-arrow to the right of the Omit Expression box displays a
drop-down list of previously entered expressions. Selecting an expression from the
list copies it to the main expression box. The history of omit expressions you’ve
used is saved as part of your data file so they can be reused later.

You can enter two types of statements: “omit” and “select”. The omit statement
has the general form Omit {logical expression}. Likewise, the select statement has
the form Select {logical expression}. Press the Restore button to restore the
status of all cases to selected.
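
For example (the variable names here are hypothetical), the statement
Omit Age > 65
omits all currently selected cases with Age greater than 65, and the statement
Select Sex = “F”
restores previously omitted cases for which Sex equals “F”.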

The omit status of a case can be either selected or omitted. When a data set is
created, all cases are selected. You can use the omit statement to selectively omit
cases that currently have the status selected; the omit statement doesn’t change the
status of cases already omitted. Use the select statement to change the status of
omitted cases back to selected; the select statement doesn’t change the status of
already selected cases.

Once you’ve entered and successfully executed an omit expression, the number of
cases omitted or selected is reported. The number of cases currently selected and
omitted are displayed in the status bar at the bottom of the spreadsheet page.

The successful omit or select expression remains in the Omit Expression box and
can be edited to create a new omit expression. To delete a previous expression
from the box, press the Clear button.

The effects of sequential omit expressions are cumulative; a second omit
expression will only act upon the cases not omitted by the first expression. Thus,
the two expressions “Omit Conc < 10” and “Omit Sex <> ‘F’” entered one after
the other have the same effect as the single expression “Omit Conc < 10 OR
Sex <> ‘F’”.

The omit expression can be as complex as you like and can span all of the lines in
the box. You can press Enter while typing an omit expression to advance to the
next line.

A useful tip for using the omit command takes advantage of the fact that
arithmetic involving missing values always results in a missing value. Thus, the
omit expression
Omit Conc + Height + Sex + Weight = M

will omit a case if any of the variables have a missing value for the case and is
easier to type than the alternative:
Omit Conc = M or Height = M or Sex = M or Weight = M

Please see the Arithmetic and Logical Expressions section on page 49 for more
information about logical expressions. The built-in functions are defined on page
53.

Sort Cases
Statistix offers a Sort Cases procedure to sort the cases of a data set into ascending
or descending order based on the values of selected key variables.

Highlight the variables you want to use as the key variables in the Variables list
box and press the right-arrow button to move them to the Key Variables box. The
order in which the variables appear in the Key Variables list box is important. The
cases will be sorted by the first variable first, then by the second variable within
the first, and so on.

Any type of variable (integer, real, date, or string) can be used as a key variable.
Sorting using string keys is not sensitive to upper and lowercase letters (e.g., the
keys “Iowa” and “IOWA” will be treated the same).

In most applications, you’ll want the cases kept intact so that the values for the
non-key variables are moved to their new positions along with the key variable
values. Occasionally, you’ll want to sort the values of a variable without
disturbing the order of some or all of the remaining variables. In such cases, move
the variables you want excluded to the Exclude Variables list box.

You can have your data sorted into ascending or descending order. Indicate your
preference by selecting one of the Sort Order radio buttons.

Omitted cases are sorted along with selected cases. The omit status of a case
moves with the key variables.

Rename Variables
Use this procedure to rename variables in your open data file.

Simply type in new variable names in the New Names box next to the old names
of the variables you want to rename. Leave the space blank for variables you don’t
want to rename.

Reorder Variables
The Reorder Variables procedure is used to change the order in which the
variables are displayed in the spreadsheet page. If you’ve selected spreadsheet
order rather than alphabetical order using the General Settings procedure (see
Chapter 1), it will also change the order that variables are listed in the Variables
list boxes that appear in dialog boxes.

The variables in the Variables list box are listed in the order they currently appear
in the spreadsheet. You move the variables to the New Variable Order list box in
the order you want them to appear in the spreadsheet. First highlight one or more
variables in the Variables box, then press the right-arrow button to move them.
You can move a single variable by double-clicking on the name with your mouse.

Use your mouse to drag variables from one box to another. You can also reorder
variables within a box by dragging them to a new position. You don’t have to
move all the variables to the New Variable Order box. Any variables remaining in
the Variables box will retain their relative order and appear after the variables in the
New Variable Order box. Press OK to save your changes.

Column Formats
This procedure allows you to control the column width of data stored in variables,
and the numerical format of real variables. The widths and formats you choose
affect both the appearance of the columns in the spreadsheet page and columns
displayed in the Print report.

First, highlight one or more variable names in the Variables list box. The data
type of the first variable you highlight appears on the dialog box. Only variables
with the selected data type will be changed. In the example dialog box on the next
page, the highlighted variable LotSize is a real variable.

For string, date, and integer variables, you can only change the column width
(number of characters). For real variables, choose a format by selecting one of the
four Numeric Format radio buttons, then enter values for Width and Decimal
Places (Significant Digits for decimal and exponential formats). Be sure to enter a
width large enough to account for the decimal point and the minus sign for
negative numbers. Press the OK button to save your changes.

You can apply a single column format to more than one variable at a time. To do
so, highlight one of the variables you want to format; the current column format
values for that variable appear in the dialog. Next, highlight additional variables
you want to format. You can do this by clicking on the variable names while
holding down the Ctrl key. You can also select a range of contiguous variables by
clicking on the first variable in the range and dragging the pointer to the last
variable in the range while holding the mouse button down.

The Automatic format displays numbers using an integer format for whole
numbers, or a decimal format when the number contains a fraction. For numbers
with a fraction, as many digits as possible will be displayed, but trailing zeros are
trimmed. The automatic format works well for data you’ve entered manually
because the numbers generally are displayed just as you’ve entered them. For
computed variables, such as variables created using the Transformations
procedure, the automatic format often displays nonsignificant digits.

The Fixed format displays numbers in a decimal format with a fixed number of
decimal places.

The Decimal format displays numbers using a decimal format where the decimal
point is free to move to maximize the number of digits displayed. You specify the
number of Significant Digits.

The Exponential format displays numbers using scientific notation. A number
displayed in exponential format has two parts, the mantissa and the exponent. The
mantissa is a number displayed as a decimal number that’s always greater than or
equal to 1.0 and less than 10.0. The exponent is displayed using the letter E
followed by a signed integer. The number represented in this fashion is the
mantissa multiplied by 10 raised to the exponent. For example, the number
4.23E-02 is equal to 4.23 × 10^-2, or 0.0423. You specify the number of
Significant Digits, which is the number of digits in the mantissa.

Labels
The Labels procedure is used to enter or change the current values of three kinds
of labels: the data set label, variable labels, and value labels. These labels are used
to annotate Statistix reports and graphs. After selecting Labels from the Data
menu, you’re presented with a pop-up menu with the three selections: Data Set,
Variable, and Value.

The File Info report on the File menu displays all of the labels for the open file.

Data Set The Data Set Label is a one line heading used to describe the data set. It’s printed
Label at the beginning of each Statistix report.

Variable Variable Labels are descriptive comments for variables. Variable labels remind
Labels you what data the variables contain and how they were created. Variable labels are
incorporated into the heading of some reports, such as the stem and leaf plot.
They’re also used for axis labels for graphs (e.g., scatter plots and histograms).

The variables and variable labels are presented in a table as shown in the example
on the next page. Type in your variable labels beside the corresponding variable.
Variable labels can be up to 40 characters long.

Click OK to save your changes.

Value Labels Value Labels are descriptive strings attached to individual values for a variable.
For example, coding states using numbers may be convenient for data entry
purposes: 1 for Iowa and 2 for Nebraska in the example dialog below. Value
labels serve as comments to remind you what the codes represent. These labels
also appear in Statistix graphs and reports (e.g., cross tabulations) to improve
readability.

First, highlight the variable of interest and move it to the Source Variable box.
The current value labels of the selected variable, if any, are displayed in the Value
Labels list box. If you want to use the same labels for other variables, move the
variables to the Copy To Vars list box.

To define a label, enter a number in the Value box, and a string in the Label box,
then press the right-arrow button to add the value-label pair to the Value Labels
list. Value labels can be up to 20 characters long.

To delete a defined label, highlight the value-label pair in the Value Labels box,
then press the left-arrow button. The value and label appear in the Define Label
boxes so you have the opportunity to edit the label and add it back to the list.

When you’ve finished defining value labels for the source variable, press the Save
button to save the labels. Then select another source variable, or press the Close
button to exit.

You can copy the labels already defined for one variable to one or more other
variables. First, move the variable that has labels defined to the Source Variable
box. The variable’s labels are displayed in the Value Labels box. Then, move the
unlabeled variables to the Copy To Vars list box. Finally, press the Save button.

Arithmetic and Logical Expressions


The data management procedures Transformations and Omit/Select/
Restore are powerful data manipulation tools. To appreciate the full potential of
Statistix, you need to understand the principles of arithmetic and logical
expressions discussed in this section. This material will be familiar to people who
are experienced in either database management software or programming
languages.

Arithmetic The following arithmetic operators are available:
Expressions
^ Exponentiation. For example, A ^ B is A raised to the B-th power.

* Multiplication. A * B is the product of A and B.

/ Division. A / B is A divided by B.

+ Addition. A + B is the sum of A and B.

- Subtraction or reversal of sign. A - B is B subtracted from A. The expression
-A, unary negation, reverses the sign of A.

These operators are used to form arithmetic expressions with constants, variable
names, and built-in functions. A constant is simply a number, such as 1.96 or
3.1416. String and date constants must be enclosed in quotation marks (e.g.,
“Great Scott”, “10/12/55”). Built-in functions are described in more detail on page
53.

An example of an arithmetic expression is A + 2.75 * B, where A and B are
variable names and 2.75 is a constant. Statistix evaluates this expression for each
case by first taking the product of the constant 2.75 and the variable B. The value
of variable A is then added to get the final result.

The order in which operators in an expression are evaluated is determined by some
simple rules of precedence. The rules of precedence Statistix uses to evaluate
arithmetic expressions are the same as those used in algebra. If all of the
arithmetic operators in an expression are of equal precedence, they’re evaluated in
order from left to right. However, not all operators share equal precedence. The
following table ranks the arithmetic operators according to precedence.

Highest Precedence:  - (unary negation)
                     ^
                     *, /
Lowest Precedence:   +, -

Unary negation has the highest precedence and is always performed first if it
occurs anywhere in an expression. Exponentiation (^) is performed next, followed
by multiplication (*) and division (/). Multiplication and division are of equal
precedence, so the order in which they appear in the expression determines which
is done first. Addition (+) and subtraction (-) share the lowest precedence among
arithmetic operators.

Any expression within parentheses is evaluated before expressions outside the
parentheses. For example, in the expression (A + 2.75) * B, the sum A + 2.75 is
evaluated first and the result is then multiplied by B. Parenthetical expressions can
be nested, with the innermost ones being evaluated first. An example is
(A + B * (C + D)) * E.

The simplest arithmetic expression is just a constant or variable by itself,
in which case no arithmetic operators are involved.

Date and Some arithmetic can be performed on dates and strings. A date can be subtracted
String from another date to compute the number of days between two events. For
Arithmetic example, Conc = (“05/23/92” - Birth) / 365. You can also add a constant to a
date. Multiplication and division of dates aren’t allowed.

Addition can be performed using string variables and constants. For example, the
expression First + “ ” + Last concatenates the strings in the variables First and
Last with a space in between.

Logical When arithmetic expressions are evaluated, they return numerical values by case.
(Boolean) On the other hand, logical expressions return the boolean values True or False by
Expressions case. As a Statistix user, you’ll never actually see the values True and False. What
you’ll see are the consequences of some action that was based on whether the
expression was True or False, for example, whether a case becomes omitted or not
(see Omit/Select/Restore Cases on page 41 for details).

We now introduce two new classes of operators—relational operators and logical
operators. These operators are used to construct logical expressions, expressions
that take the boolean values True or False when evaluated. The relational
operators are:

=  equal to                 <>  not equal to
<  less than                <=  less than or equal to
>  greater than             >=  greater than or equal to

The logical operators are NOT, AND, and OR.

Relational operators require arithmetic expressions for arguments—one to the left
and one to the right of the operator (remember that the simplest arithmetic
expression is just a constant or a variable name). Relational operators return the
boolean values True or False when evaluated. Some typical examples of simple
logical expressions using relational operators are:
A+B>C
A = 999
A ^ 3.45 >= B / C

The embedded arithmetic expressions are evaluated before the relational operators.
All relational operators have the same precedence.

The logical operators NOT, AND, and OR are used to construct more complex
logical expressions. Logical operators require boolean arguments, or to put it
another way, the arguments for logical operators must be logical expressions.
(Note: This is why they’re called logical operators; they operate on logical
expressions.) The NOT operator requires only one argument to the right; both
AND and OR require two arguments, one on either side. The truth table presented
on the next page summarizes the action of these operators (T stands for True, and
F for False).
Argument Values           Value Returned
X  Y           X AND Y   X OR Y   NOT X
T  T              T         T       F
T  F              F         T       F
F  T              F         T       T
F  F              F         F       T

In their most general form, logical expressions are built with relational and, when
needed, logical operators. Some further examples of logical expressions are:
(A > B) AND (A = 1)
NOT ((A + B) > C)
((A = B) AND (B = C)) OR ((A <> D) AND (A < 1.96))

There are often many ways to express the same condition. Use the one that is
clearest to you, not necessarily the most “elegant”. In such expressions, embedded
arithmetic operators are evaluated first, followed by relational operators. Logical
operators have the lowest precedence of any operator. NOT takes precedence over
AND and OR; AND is evaluated before OR. The order of evaluation is easy to
control with the use of parentheses. Be careful to use enough parentheses to ensure
that things are evaluated in the intended order. The following table summarizes
the precedence ordering of Statistix operators and built-in functions.

Highest Precedence:  parenthesized expressions
                     built-in functions
                     - (unary negation)
                     ^
                     *, /
                     +, -
                     =, >, <, >=, <=, <>
                     NOT
                     AND
Lowest Precedence:   OR

Machine Computers don’t perform decimal arithmetic exactly. While the rounding error
Precision and that occurs during arithmetical operations is usually negligibly small, there is one
Tests for situation where it’s extremely important: tests for exact equality. For example,
Equality suppose you want to perform the transformation Y = 9*X/9. You might expect Y to
equal X, but it may not because of very small rounding errors. Therefore, it’s not
certain that the logical comparison If Y = X will return the value True. It’s safer to
perform the comparison using the expression If Abs (X - Y) < d, where d is some
small number and Abs is the absolute value function.

Handling of Missing values require special consideration when arithmetical or logical
Missing expressions are evaluated. In arithmetical expressions, if any of the arguments
Values have the value missing, the expression is automatically evaluated as missing. This
is only reasonable—you can’t perform arithmetic on numbers that don’t exist.

Logical expressions are somewhat trickier. Different actions are taken depending
on the context in which the relational expression is used. The following truth table
shows the rules used with missing values for Omit/Select/Restore Cases.
Argument Values         Value Returned
X  Y          X=Y   X<>Y   X>Y   X<Y   X>=Y   X<=Y
M  not M       F      T     F     F      F      F
M  M           T      F     F     F      F      F

The If-Then-Else construct has the same truth table for X = Y and X <> Y.
However, it avoids the issue for the other inequalities by assigning the target
variable the value missing.

You may think it would make sense to always assign the target variable the value
missing. After all, how do you decide if two unknown values are equal? The
reason we use these rules is so you can manipulate the missing values. For
example, perhaps you want to replace all of the missing values with 0.0:
If X = M Then X = 0
Else X = X

You should be aware, however, that sometimes you may not be satisfied with these
rules. Consider, for example:
If X = Y Then NewVar = 1 Else NewVar = 2

If both X and Y are missing, Statistix will evaluate the Else expression and assign
2 to NewVar. You may really want NewVar to be missing as well. This is easily done
with a second transformation, as shown below.
If (X = M) OR (Y = M)
Then NewVar = M
Else NewVar = NewVar

Remember that if the value for any variable in an arithmetic expression is missing,
the expression is evaluated as missing. This can be exploited in a variety of ways.
For example, suppose you have the variables A, B, C, D, and Y, and you want Y to
be missing whenever A, B, C, or D is missing. One way of doing this is:
If (A = M) OR (B = M) OR (C = M) OR (D = M)
Then Y = M Else Y = Y

But a more compact method is:
If A + B + C + D = M Then Y = M Else Y = Y

Built-in There are 74 built-in functions that can be included in arithmetic expressions.
Functions
Abs(x) Day(x) Modulo(i,j) SelCase
Angle(x) DayofWeek(x) Month(x) Sin(x)
Arccos(x) Delete(s,i,j) Normalize(x) Sinh(x)
Arccosh(x) Diff(x[,i]) Number(s) Sqr(x)
Arcsin(x) Exp(x) Percentile(x) Sqrt(x)
Arcsinh(x) Factorial(i) Pi String(x[,i])
Arctan(x) Floor(x) Pos(s1,s2) Studentize(x)
Arctanh(x) GeoMean(x) Power(x,y) Sum(x)
Atkinson(x) Insert(s1,s2,i) Random(dist,a,b) Tan(x)
Capitalize(s) Lag(x[,i]) Rank(x) Tanh(x)
Case Length(s) Round(x) Trunc(x)
Cat(i,j) Ln(x) RowCount(v1..vn) Unitize(x)
Ceiling(x) Log(x) RowMax(v1..vn) UpCase(s)
Copy(s,i,j) LowCase(s) RowMean(v1..vn) Variance(x)
Cos(x) M RowMedian(v1..vn) Year(x)
Cosh(x) Max(x) RowMin(v1..vn) ZInverse(x)
Count(x) Mean(x) RowSD(v1..vn) ZProb(x)
CumSum(x[,k]) Median(x) RowSum(v1..vn)
Date(s) Min(x) SD(x)

Most of the functions require arguments. The arguments of most functions can be
any valid arithmetic expression. This will be represented as “x” in the following
descriptions. Note that x can include built-in functions, including the function it’s
an argument for, such as Sqrt (Sqrt (x)). If x has the value missing when it’s
evaluated, the function is also assigned as missing. In a few cases, an integer
constant is required for an argument, such as Cat (5,1). Integer constants are
represented as “i” or “j” in the following descriptions. String constants or
variables are represented as “s”. The row functions (e.g., RowMean) require a
variable list for an argument and are represented as “v1..vn”. Functions Case, M,
Pi, and SelCase don’t require input arguments.

The function names appear in a list box when you’re using the Transformations or
Omit/Select/Restore Cases procedures. Functions can be selected directly from the
Functions list box and inserted into the expression box. You can also type the
function name into an expression, in which case you can abbreviate the function
name using enough of the first characters to distinguish it from the other function
names.

Most of the functions operate using one case at a time. For example, Sqrt(x)
computes the square root of the expression x, case by case. Some functions compute
a single value for an entire column x. The Mean (x) is an example of this type; it
computes the mean of the column x. Column functions are normally used as part
of a larger expression (e.g., x - Mean(x)). The nine column functions are Count,
Geomean, Max, Mean, Median, Min, SD, Sum, and Variance.
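
For example (using a hypothetical variable X), a standardized version of X can be
computed with the single transformation:
Z = (X - Mean(X)) / SD(X)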

Most of the functions expect numerical arguments and return numerical results.
There are ten functions that require string arguments (Upcase, Lowcase, Length,
Pos, Copy, Delete, Insert, Capitalize, Date, and Number) and four functions that
require date arguments (Day, Month, Year, and Dayofweek).

A description of each of the functions follows.

Abs (x) Absolute value of x.

Angle (x) Computes the angular transformation (also called the arcsin-square root
transformation) for proportions. Proportions near 0 or 1 are spread out so as to
increase their variance (Snedecor and Cochran, 1980). The argument x must be a
proportion between 0 and 1. You can apply the function to a percentage by first
dividing the percentage by 100.

Arccos (x) Arc cosine of x in radians.

Arccosh (x) Inverse hyperbolic cosine of x.

Arcsin (x) Arcsine of x in radians.

Arcsinh (x) Inverse hyperbolic sine of x.

Arctan (x) Arctangent of x in radians.

Arctanh (x) Inverse hyperbolic tangent of x.

Atkinson (x) Computes the transformation for the Atkinson score method used to determine
what power transformation, if any, is needed in a linear regression analysis
(Weisberg, 1985). The argument of the transformation is the dependent variable in
the regression analysis. The transformed variable is added as an independent
variable in the regression analysis to test for a power transformation (see Weisberg
for an example).

Capitalize (s) Capitalizes the first letter of each word in the string s.

Case Case index. Case indicates the position of a value in the data set. If the data set is
thought of as a rectangular table of numbers, the variable names identify the
columns and the case indices identify the rows. Case is incremented for cases that
are omitted. The Selcase function discussed on page 59 skips omitted cases.

Cat (i, j) Categorical index generator. This function generates index values. The integer
argument i gives the number of categories in the index, and the integer argument j
is the repeat factor. This function generates the numbers 1 through i, repeating
each value j times. First j 1’s are generated, then j 2’s, etc., up to j i’s. After i x j
values are generated, the process repeats. Both i and j must be specified as
positive integers.

Some examples are shown below. The example assumes that there are 12 cases in
the data set, and all have been selected.

Y = Cat (3,2) Y : 1,1,2,2,3,3,1,1,2,2,3,3

Y = Cat (3,3) Y : 1,1,1,2,2,2,3,3,3,1,1,1

Y = Cat (4,3) Y : 1,1,1,2,2,2,3,3,3,4,4,4

Y = Cat (5,1) Y : 1,2,3,4,5,1,2,3,4,5,1,2

Y = Cat (2,4) Y : 1,1,1,1,2,2,2,2,1,1,1,1

Although this may initially appear to be a rather simple-minded function, it’s
extremely useful for creating categorical variables needed for numerous types of
analyses. For example, suppose you have data from a randomized block design
with 4 blocks and 3 treatments applied within each block. Then suppose your data
are ordered such that the 3 observations for block 1 came first, followed by the 3
observations for block 2, etc. To create your block index, you would specify:
Block = Cat (4,3). The treatment index is created as: Treat = Cat (3,1).

Note that if the data had been ordered such that the 4 values for treatment 1 came
first, followed by the 4 values for treatment 2, etc., the indices would be generated
as Block = Cat (4, 1) and Treat = Cat (3, 4).

Ceiling (x) The smallest integer greater than or equal to x. For example, Ceiling(1.23) is 2.

Copy (s,i,j) Copies j characters of text from the string s starting at the i-th character.

Cos (x) Cosine of x. Units of x are assumed to be radians. Angles can be converted from
degrees to radians as: Radians = Degrees / 180 * Pi.

Cosh (x) Hyperbolic cosine of x.

Count (x) Number of usable cases of x. This function returns the total number of usable
cases (i.e., selected and not missing). It’s not a running counter; please see Case
above for such a function.

CumSum This function is two functions in one. When used with a single argument x, it
(x[,k]) computes the running sum of x. The value it returns for the i-th case is the sum of
the first i cases of x. When used with the two arguments x and k, it computes the
decision interval cusum. The value returned for the i-th case S(i) is defined as
S(i) = max(0, S(i-1) + x(i) - k).
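
For example, if a variable X contains the values 1, 2, 3, 4, then CumSum(X)
returns the running sums 1, 3, 6, 10.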

Date (s) Converts the string value s to a date value. This function is used to convert string
variables to date variables.

Day (x) Day of month. The argument must be a date.

DayofWeek (x) Returns an integer representing the day of week (1 = Sunday, 2 = Monday, etc.).
The argument x must be a date.

Delete (s,i,j) Deletes j characters of text from the string s starting at character index i.

Diff (x, i) Difference of x and x lagged by i cases. The value of this function for the j-th case
is x at case j minus x at case j-i. The argument i may be omitted, in which case i =
1 is assumed. See also Lag (x, i).

Factorial (x) Computes x!, the factorial of x.

Exp (x) Exponentiation; e raised to the power x.

Floor (x) The largest integer less than or equal to x.

GeoMean (x) Geometric mean of the column x.

Insert (s1,s2,i) Inserts the string s1 into the string s2 at character index i.

Lag (x, i) Lag x by i cases. The value of this function for the j-th case is the (j - i)-th case of
x. Cases 1 up to i receive the value missing. Only selected cases are treated. If
the (j - i)-th case of x is missing, Lag will return the value missing for the j-th case.
The integer lag factor i is optional; it defaults to 1 if omitted.
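
For example, if a variable X contains the values 3, 5, 9, 14, then Lag(X) returns
M, 3, 5, 9 and Diff(X) returns M, 2, 4, 5.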

Length (s) Computes the length (number of characters) in the string s.

Ln (x) Natural (base e) log of x.

Log (x) Base 10 log of x. Use the Power function discussed on the next page to compute
antilogs (e.g., Power (10, x)).

LowCase (s) Converts the string s to all lowercase.

M Missing value indicator. This can be used to assign a variable a missing value
(e.g., Var = M). It can also be used to test a variable for missing data (e.g., If Var
= M).

Max (x) Maximum value of x over all selected cases.

Mean (x) This function returns the mean of x over the selected cases.

Median (x) This function returns the value of the median for x over all selected cases.

Min (x) This function returns the minimum value of x over all selected cases.

Modulo (x, y) Computes the modulus of x by y (the remainder of x divided by y). x and y may be
expressions but must have integer values. Example: Modulo (12, 5) = 2.

Month (x) The month of the year. The argument x must be a date.

Normalize (x) Normalize scales x such that the sum of all values of x equals 1.

Number (s) Converts the string value s to a number. This function is used to convert string
variables containing numeric strings to real or integer variables.

Percentile (x) Percentile of x. Suppose you have a variable Score that contains the test scores for
all students in a class. The transformation PScore = Percentile (Score) creates a
new variable PScore that contains the percentiles of each test score.

Pi The constant π, 3.1415926.

Pos (s1, s2) Returns the starting position of the string s1 where it appears in the string s2. If
the string s1 does not appear in the string s2, zero is returned. Example: Pos
(“def”, “abcdefg”) = 4.

Power (x, y) Raises the value x to the power y. Produces the same result as x ^ y. This function
is defined for nonnegative values for x and for negative values for x when y is a
positive whole number.

Random (dist, Generates a random number from one of several probability distributions. The
a,b) first parameter dist is the name of the probability distribution. You can abbreviate
the distribution name using enough of the first characters to distinguish it from the
other distributions. The second and third parameters are specific to the
distribution:

Random(beta, a, b) generates a random variate from the beta distribution with
shape parameters a and b (a > 0, b > 0).

Random(binomial, n, p) generates a random variate from the binomial distribution
with n = number of trials and p = probability of a success.

Random(chisquare, df) generates a random variate from the chi-square distribution
with df degrees of freedom.

Random(exponential, x) generates a random variate from the exponential
distribution with rate parameter x (x > 0).

Random(F, ndf, ddf) generates a random variate from the F distribution with
numerator degrees of freedom ndf and denominator degrees of freedom ddf.

Random(gamma, a, b) generates a random variate from the gamma distribution
with shape parameter a and scale parameter b (a > 0, b > 0).

Random(normal, m, sd) generates a random variate from the normal distribution
with mean m and standard deviation sd.

Random(permutation, i) generates a random permutation of the integers from 1 to i.

Random(Poisson, m) generates a random variate from the Poisson distribution
with mean parameter m.

Random(uniform, low, high) generates a random variate from the uniform
distribution in the range low to high.

Uniform random numbers are generated using the Mersenne twister algorithm
(Matsumoto et al., 1998), which is also the basis for the other distributions (Ahrens
and Dieter, 1973; Ahrens and Dieter, 1974; Ahrens and Dieter, 1982; Cheng, 1978;
Kachitvichyanukul and Schmeiser, 1988).
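
For example (the variable names here are hypothetical), Z = Random(normal, 0, 1)
fills a variable with standard normal variates, and Heads = Random(binomial, 10, 0.5)
simulates the number of successes in ten trials with success probability 0.5 for
each case.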

Rank (x) Ranks of x. The value of this function for the i-th case of x is the rank of that case.
If some cases are tied, these cases receive the appropriate average rank.

Round (x) Rounds x to the nearest whole number.

RowCount Counts the number of nonmissing values in the variable list for each case. As with
(v1..vn) the other row functions described below, the variable list can include Var1 .. Var99
syntax and the keyword All.

RowMax The maximum value among the variables listed for each case.
(v1..vn)

RowMean The mean of the variables listed for each case. The mean is computed ignoring
(v1..vn) missing values.

RowMedian The median of the variables listed for each case. The median is computed ignoring
(v1..vn) missing values.

RowMin The minimum value among the variables listed for each case.
(v1..vn)

RowSD (v1..vn) The standard deviation of the variables listed for each case.

RowSum The sum of the variables listed for each case. The sum is computed ignoring
(v1..vn) missing values.
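
As an illustration (the variables Q1 through Q5 are hypothetical), Total =
RowSum(Q1 .. Q5) computes a per-case total and AvgScore = RowMean(Q1 .. Q5)
a per-case mean, each ignoring missing values.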

SD (x) Standard deviation of x. This function returns the sample standard deviation for
the column x. This is the so-called unbiased estimate of the standard deviation; the
divisor is the square root of n - 1, where n is the number of usable cases.

SelCase Selected case index. Selcase indicates the position of a value in the data set with
respect to the selected cases. If the data set is thought of as a rectangular table of
numbers, the variable names identify the columns and the case indices identify the
rows. Selcase is not incremented for cases that are omitted. Selcase ranges from
one to the maximum number of selected cases. Case counts all cases regardless of
their omit status.

Sin (x) Sine of x. Units of x are assumed to be radians.

Sinh (x) Hyperbolic sine of x.

Sqr (x) Squares the value of x.

Sqrt (x) Square root of x.

String (x[,i]) Converts a number or date to its string equivalent. The optional constant i
specifies the number of decimal places when converting numbers.

Studentize (x) Studentizes x. A variable is studentized by subtracting the sample mean from the
original values and dividing the deviations from the sample mean by the sample
standard deviation. Once x has been studentized, it’ll have a mean of zero and a
standard deviation of one.
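
For example, ZConc = Studentize(Conc) creates a variable (the name ZConc is
hypothetical) containing (Conc - Mean(Conc)) / SD(Conc) for each case.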

Sum (x) This function returns the sum of all selected cases for x.

Tan (x) Tangent of x. Units of x are assumed to be radians.

Tanh (x) Hyperbolic tangent of x.

Trunc (x) Truncates the decimal portion of a number. For example, 1.98 when truncated
becomes 1.0.

Unitize (x) Scales x so its vector norm equals one. The norm, or length, of a vector is the
square root of the sum of the squares of the elements of the vector.

UpCase (s) Converts the string s to all uppercase.

Variance (x) This function returns the sample variance for x.

Year (x) Year of a date. The argument x must be a date.

ZInverse (x) Inverse of the standard normal distribution. If the value of the argument x is
between 0 and 1, ZInverse returns the inverse of the standard normal distribution.
That is, the value returned is the z value (standard normal value) for which the
probability of a smaller value is the value of x.

ZProb (x) The standard normal probability of x. This function returns the probability of a
value smaller than x from a standard normal distribution. In other words, this
function returns the lower tail probability.
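
For example, ZInverse(0.975) returns approximately 1.96, the z value below which
97.5% of the standard normal distribution lies, and ZProb(1.96) returns
approximately 0.975.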

C H A P T E R

3
File Menu
Statistix offers flexible and easy-to-use file management procedures. These
procedures are used to manipulate data files stored on fixed disks and removable
media. You’ll use the file management procedures to open and save data files,
import data created by other programs, and view files without leaving Statistix.

Data that you create using Statistix are temporary. Data that you enter, import, or
create using transformations aren’t stored on disk until you explicitly create a disk
file to permanently store your Statistix data.

The New procedure is used when you want to create a new Statistix data file. It
closes the current data set and displays an empty spreadsheet.

The Save and Save As procedures are the usual methods of saving Statistix data.
These procedures create a high-speed, compact binary file representation of a
Statistix data set. Statistix data files have the file name extension “.sx”.

The Open procedure is used to retrieve Statistix data files created previously using
the Save and Save As procedures.

The Merge procedures combine data from your active data set and a second data
set stored in a Statistix data file. There are three separate procedures to merge (1)
cases, (2) variables, and (3) labels, transformation expressions, and dialog box
settings.

The Summary File procedure is used to create a new Statistix data file containing
summary statistics of the active data set.

The Log File procedure is used to start recording a log of the Statistix procedures
you will perform.

The Import procedures are used to read data from text files and Excel files into
Statistix. The Export procedures are used to create text or Excel versions of your
current Statistix data set so that the data can be accessed by other programs.

The View procedure is used to view the contents of text, Excel, graph, or Statistix
files.

The File Info procedure produces a report that lists details about the open file,
such as variable names, variable types, column formats, variable labels, and value
labels.

The Print procedure is used to print the contents of the active page. The active
page can be the data set page, a report, or a plot.

New
The New procedure is used when you want to start a new Statistix data set. It
closes the open data set, if any, and displays an empty spreadsheet. You then add
variables to the new data set using either the Import or Insert Variables command.

If you have not saved the active data set since it was last modified, a warning
message appears on the screen. This warning gives you an opportunity to save the
active data set before erasing it from memory.

Open

The Open procedure is used to read a Statistix data file previously created using
the Save, Save As, or Summary File procedure. The file data set becomes the
active Statistix data set and is displayed on the spreadsheet. This procedure can’t
be used to open data stored in formats other than the Statistix format. Use the
Import procedures described later in this chapter to import data from text and Excel
files.

Statistix can only have one data set open at a time. If you already have a data set
open, it will be replaced with the data contained in the file you open. If you
haven’t saved the active data set since it was last modified, Statistix will warn you
and give you a chance to save it.

The Open dialog box is similar to the open dialogs of other Windows applications.
The appearance depends upon the version of Windows you’re using. An example
appears on the next page.

Your cursor starts at the File Name box. You can enter the name of the file you
want to open, and then press the Open button to open the file.

You can also select a file from the file list. Double click on the name of the file
you want to open. Use the scroll bar to scroll down the list if the file you want
isn’t visible. You can navigate to a different folder using the panel on the left side,
or the navigation bar at the top.

Reopen
Selecting Reopen from the file menu displays a list of recently used Statistix data
files. Selecting one of the files opens the file and makes it your current data set.

The down-arrow button that appears to the right of the open icon on the toolbar
displays the same list of recently used files.

Sample Data
The data sets that are used to illustrate statistical procedures in this manual are
installed on your computer in a folder named Sample Data. You can open one of
these data files by selecting Sample Data from the file menu. The standard open
dialog appears listing the sample data files. Double-click on the name of the file
you want to open.

Save
Statistix data files are ideal for saving your Statistix data for future Statistix
analyses. All information about your Statistix data is preserved, including variable
names, case omit status, missing values, value labels, and Statistix dialog box
settings. The Save procedure takes a snapshot of the data set’s present state for future use.
Statistix data files store the data in a compact binary format—these files can be
read and written rapidly.

The current file name of your active data set is displayed in the spreadsheet’s title
bar and in the spreadsheet tab. If you’ve just created a new data set, then the word
“Untitled” appears on the spreadsheet tab.

If your data set is untitled when you select the Save procedure from the menu, the
Save As dialog box described below is displayed and you’ll be asked to enter a file
name. If your data set already has a file name, then that file is updated with the
current state of your data.

Use the Save As procedure described below if you’d like to save a named data set
but with a different file name.

Note: To save a report or plot, click on the save icon on the toolbar.

Save As
Use the Save As procedure to save your Statistix data for future Statistix analyses.
All information about your Statistix data is preserved, including variable names,
case omit status, missing values, value labels, and Statistix dialog box settings.

The current file name of your active data set is displayed in the spreadsheet’s title
bar. If you’ve just created a new data set, then the word “Untitled” appears on the
spreadsheet tab.

Use the Save As procedure to save an untitled data set for the first time, or to save
a titled data set using a different file name. Use the Save procedure discussed
above if you want to update the file using the name that appears on the spreadsheet
title bar.

The Save As dialog box has a File name box where you can enter the name of the
new file. There’s a file list that lists the Statistix files in the current folder. You
can navigate to a different folder in the usual manner.

Your cursor starts at the File name box. You can enter the name of the file you
want to create, and then press the Save button to save the file. If you don’t specify
a path, the file is saved in the folder displayed. The “.sx” file name extension will
be added for you.

More often than not you’ll want to use a new file name when using the Save As
procedure. But you can select a file from the file list. Double click on the name of
the file you want to use to save your data. Use the scroll bar to scroll down the list
if the name you want isn’t visible.

Statistix 10 can open data files created by earlier versions of Statistix. However,
older versions of Statistix can’t open Statistix 10 data files. Click on the Save as
type drop down box to select an older file format. Statistix 4 data files can be
opened using Statistix 4.0 and later versions (including Statistix for Windows 1 and
Statistix for Windows 2). Statistix 7 data files can be opened using Statistix 7.0 and
later versions. Statistix 8-9 data files can be opened using Statistix 8.0 and later
versions. Saving data using an older format will result in the loss of some
information. In particular, Statistix 10 is the first version to support Unicode
characters in variable names and string variables. If you have non-ANSI characters
in your data, they will be lost when saving files using an older format.

Note: To save a report or plot, click on the save icon on the toolbar.

Merge Cases
The Merge Cases procedure is used to combine the data of your active data set
with data stored in a second Statistix data file. Cases of data stored in a file are
appended to the existing variables in the active data set.

There are two steps to using this procedure. The first step is to select the merge file
using the standard open dialog.

Once you’ve specified the name of the merge file (Ht&Wt.sx in our example), a
second dialog box is displayed, as shown below.

The variables in your current data set are listed in the Variables list. The variables
found in the merge file are listed in the Merge File Variables box. The active data
set has three variables named Age, Height, and Weight. The file Ht&Wt has three
variables named Gender, Height, and Weight. Only cases of variables that match
variable names in your current data set can be appended (Height and Weight in this
example).

Press the OK button to start the merge. Once the merge is completed, the data for
the Ht&Wt file variables Height and Weight will be appended to the active data set
variables Height and Weight at the bottom of the spreadsheet. Since the Ht&Wt
variable Gender does not match any variables in the current data set, its data will
be ignored.

Merge Variables
The Merge Variables procedure is used to combine the data of your active data set
with data stored in a second Statistix data file. Variables of data stored in a file are
added to the existing variables in the active data set.

This procedure is best explained with an example. There are two steps. The first
step is to select the merge file using the standard open dialog.

Once you’ve specified the name of the merge file (we’ll use the same Ht&Wt file
for this example), a second dialog box is displayed, as shown below.

The variables in your current data set are listed in the Variables list. The variables
found in the merge file are listed in the Merge File Variables box. Select the
variables you want to merge from the Merge File Variables box and move them to
the Add Variables box. Only variables with names that don’t match variables in
your current data set can be selected (Gender is the only such variable in this
example).

Press the OK button to start the merge. Once the merge is completed, there’ll be
four variables in the current data set—Age, Height, Weight, and Gender.

Merge Labels, Transformations, Etc.


A Statistix data file stores a table of data consisting of numbers, dates, and strings.
But it also stores other useful information that you may want to share between
separate data files: variable names, variable labels, value labels, dialog box
settings, transformation expressions, and omit expressions. The Merge Labels,
Transformations, Etc. procedure is used to merge these kinds of data into an
active data set. Unlike the Merge Cases and Merge Variables procedures
discussed on the preceding pages, this procedure can also be used to import data
into a new data set. In this way, any Statistix data file can be used as a template to
create a new data file with the same structure, but with a new table of values.

There are two steps to using this procedure. The first step is to select the merge
file using the standard open dialog.

Once you’ve specified the name of the merge file, a second dialog box is
displayed, as shown below.

There are four check boxes on the dialog box, one for each kind of data that can be
merged into the active data set using this procedure. Simply check the boxes for
the types of data you want to merge.

You can merge variable names into an active data set with other variables, or with
an empty data set that doesn’t have any variables yet. The case data for merged
variable names are ignored.

When merging variable labels and value labels, labels for any variables in the
merge file that match the names of variables in the active data set replace the
original labels, if any, in the active data set.

Dialog box settings refer to the values of edit controls, list boxes, check boxes,
etc. that appear on data, file, and statistics dialog boxes.

Lists of transformation and omit cases expressions performed on a data set are
saved along with the rest of the data when you use the Save procedure to create a
Statistix data file. These expressions can be reused by the Transformations and
Omit/Select Restore Cases procedures. By merging these lists from one data file
into the active data set, you can reuse these expressions without having to retype
them.

Summary File
The Summary File procedure is used to create a new Statistix data file containing
summary statistics of your active data set. Summary statistics, such as the mean
and standard deviation, can be computed for selected variables broken down by
one or more categorical variables. The file will contain one case for each unique
combination of values for the categorical variables. The current data set isn’t
modified by the procedure. Use the Open procedure to retrieve the summary
statistics for further processing.

Don’t confuse this procedure with the Save procedure, which is used to save an
exact copy of your data set.

Specification You must first specify a name for the file you want to create. You enter the name
of the file you want to create in a standard Windows save dialog box. Once you’ve
specified the file name and pressed the Save button, you’re presented with the
Summary File dialog box you’ll use to specify the details about the new file, as
shown below.

First select the Categorical Variables. The categorical variables contain discrete
values that are used to identify groups. The categorical variables can contain
numbers, dates, or strings. These variables automatically become variables in the
new data file. To select the categorical variables, first highlight the variables you
want in the Variables list, then press the right-arrow button next to the Categorical
Variables list box.

The next step is to define summary variables. The summary variable definition has
three parts: the summary variable name, a statistical function name, and the
existing variable used to compute the new variable. Enter a new name in the New
Variable box. Select a function name from the Function pull-down list. Select
the Original Variable: Highlight a variable in the Variables list box, then press the
right-arrow button next to the Original Variable box to copy the variable name.
Once you’ve completed all three parts, press the right-arrow button next to the
Summary Variables list to add the summary variable definition to the list.

The example dialog box above shows the definition for the summary variable
SumRoses. The variable SumRoses is a new variable that’ll be included in the
summary file. It will be computed using the Sum function based on the current
data set variable Roses.

There are nine functions that can be used with the Summary File procedure:
N number of observations in the group with non-missing values
Missing number of observations in the group with missing values
Mean mean
SD standard deviation
Min minimum value
Max maximum value
Sum sum
Variance variance
SE standard error of the mean

Data There must be at least one and no more than five categorical variables. Numeric
Restrictions values of categorical variables will be truncated to whole numbers. String values
of a categorical variable will be truncated to 20 characters.

Example To illustrate the summary file procedure, consider the data from a split block
design where the number of saleable roses was counted (Bingham and Fienberg,
1982). Five treatments were applied in two replicates.

Case Roses Blk Trt Within Case Roses Blk Trt Within
1 102 1 1 1 11 71 2 1 1
2 M 1 1 2 12 79 2 1 2
3 84 1 2 1 13 76 2 2 1
4 81 1 2 2 14 M 2 2 2
5 67 1 3 1 15 74 2 3 1
6 83 1 3 2 16 M 2 3 2
7 71 1 4 1 17 51 2 4 1
8 M 1 4 2 18 63 2 4 2
9 53 1 5 1 19 63 2 5 1
10 M 1 5 2 20 61 2 5 2

Please refer to the Summary File dialog box on the preceding page. The variables
Blk and Trt have been selected from the Variables list and copied to the
Categorical Variables list box. These two variables will be included in the
summary file.

Two summary variables were defined and are listed in the Summary Variables list
box. The first variable NRoses is computed using the N function with the variable
Roses as the argument. It will contain the number of observations per category for
Roses. The second variable SumRoses is computed using the Sum function and
the variable Roses. It will contain the group sums of Roses. The resulting data file
RosesStat.sx contains the following data:

Case Blk Trt NRoses SumRoses
1 1 1 1 102
2 1 2 2 165
3 1 3 2 150
4 1 4 1 71
5 1 5 1 53
6 2 1 2 150
7 2 2 1 76
8 2 3 1 74
9 2 4 2 114
10 2 5 2 124

Log File
A log file is a text file that lists the procedures performed during a Statistix session.
Log files are particularly useful for verifying that a series of transformations or
omit cases statements were performed as intended. A log file can be viewed and
printed using the Statistix View Text File procedure during a Statistix session to
review the work performed. Each procedure is date- and time-stamped so that log
file entries can be matched with printed output.

To start a log file, select the Log File procedure and enter a file name. If you don’t
enter a file name extension, the extension “.log” will be added to the file name.

If you enter the name of a file that already exists, you can choose to have new
entries appended to the existing file or you can choose to replace the old file with a
new log file.

Once you’ve started a log file, it continues to record commands until you either
start a new log file, or until you exit Statistix. The example log file below lists the
procedures used during a short Statistix session.

Log File, 05/15/12, 15:46
C:\Users\Gerard\Documents\agestudy.log
Open, 05/15/12, 15:46
C:\Users\Gerard\Documents\cholesterol.sx
Transformations, 05/15/12, 15:47
ageclass = 10 * Trunc(age / 10)
Transformations, 05/15/12, 15:47
conclass = 40 * Trunc(conc / 40)
Print, 05/15/12, 15:48
Conc,AgeClass,Conc,ConcClass,State
Histogram, 05/15/12, 15:49
Conc
Normal Curve
Cross Tabulation, 05/15/12, 15:50
AgeClass,ConcClass
Save, 05/15/12, 15:52
C:\Users\Gerard\Documents\cholesterol.sx

The first entry in the log file shows when the file was started and gives the name of
the log file. The remaining entries list the activities that followed: a Statistix file
named Cholesterol.sx was opened, two transformations were performed, the data
were printed, a histogram was displayed, and a cross tabulation report was
obtained. Finally, the modified data set was saved.

Import Excel
The Import Excel procedure is used to import data from Excel workbook files
(with file name extensions .xls or .xlsx). The import procedure adds variables to an
empty or existing Statistix spreadsheet. Each column imported from the file
becomes a new variable in your Statistix data set.

An alternative method of importing data is to paste data from the Windows
clipboard directly into the Statistix spreadsheet. See Chapter 2 for details.

The first step in importing data is to select an input file. The standard open dialog
box lists the Excel workbook files available. Once you’ve selected a file, the
Import Excel dialog appears as shown below. The contents of the Excel file are
displayed at the top of the dialog box. You can use the scroll bars to view different
portions of the file. If the file you select has more than one sheet, the sheet names
appear in tabs below the rows of data. Click on a sheet-name tab to import data
from that sheet. You can only import data from one sheet at a time.

You can import the entire file, select a subset of columns, or select a rectangular
array of data. To select one column, click on the column heading for that column.
To select a range of columns, drag the mouse over the column headings. To select
two or more columns that aren't contiguous, click on the column headings while
holding down the control key. You can also select an arbitrary range of cells.
Drag your mouse from the top-left cell to the bottom-right cell of the rectangle you
want to import. To import the entire file, don't select any columns or cells (to
cancel a selection, click on a single cell).

Select one of the Variable Names radio buttons. You must indicate whether you
want to have the variable names imported from the file, have names generated
automatically, or enter the names manually. Select a method by clicking on the
appropriate radio button.

If you select Read Names From File, the first row of the spreadsheet file is
scanned for legal variable names. If the text in the cell used for the variable name
includes a data type (I for integer, R for real, D for date, S for ANSI string, and U
for Unicode string), then the resulting variable will be the data type specified. The
letter representing the data type follows the variable name inside parentheses.
String types require a number after the letter S or U to indicate the maximum
length (e.g., Student(S15)). If the data type isn’t specified, the variable type is
determined automatically by data found on the first rows of the input file.

If you select Generate Names, then the standard one- or two-letter column
headings (A, B, C, . . . AA, AB, AC, etc.) are used for variable names. In this
case, the actual data should start in the first row of the file.

If you select the Enter Names Manually method, you must enter a list of variable
names in the Import Variable Names box, one name for each column of data you
want to import. You can specify the data types of the variables when you list the
variable names. Use the letters I, R, D, S, and U in parentheses after the variable
names to identify integer, real, date, ANSI string, and Unicode string types. String
types require a number after the letter S or U to indicate the maximum length. If
you don’t specify a data type, real is assumed.

One common problem associated with importing data from Excel files is that
variables are sometimes assigned an inappropriate data type (real, date, or string).
In particular, variables that should be assigned the real type are assigned the string
type. Sometimes this happens when there are extra headers in the files, and other
times it’s caused by blanks appearing in a column. There are several approaches to
working around this problem. If you’re reading variables names from the first line
in the file, try explicitly declaring the data type as part of the variable names (e.g.,
change Conc to Conc(R)). Another approach is to select Enter Names Manually
and explicitly declare the data types in the variable name list. A third approach is
to change the data types of variables one at a time using transformations (e.g.,
Conc(R) = Number(Conc)).

Import Text File
The Import Text File procedure is used to import data from text files (common
file name extensions include .txt, .rtf, and .csv). The import procedure adds
variables to an empty or existing Statistix spreadsheet. Each column imported from
the file becomes a new variable in your Statistix data set.

The first step in importing data is to select an input file. The standard open dialog
box lists the text files available. Once you’ve selected a file, the Import Text File
dialog appears as shown below. The contents of the file you selected are displayed
at the top of the dialog box.

The import procedure is used to add variables to a new or existing data set. You
must provide a valid variable name for each column of data you want to import.
Use the Variable Names radio buttons to indicate where you want Statistix to find
the variable names. Select Read Names From File if the text file you’re
importing has variable names on the first line of text. Select Generate Names to
have Statistix generate variable names starting with V001. Select Enter Names
Manually if you want to type a list of variable names in the Import Variable
Names text box.

In Statistix, you can use integer, real, date, and string data. However, a particular
column can only be used to store one type of data. You can specify the data type
of variables when you list the variable names. Use the letters I, R, D, S, and U in
parentheses after the variable names to identify integer, real, date, ANSI string,
and Unicode string types. String types require a number after the letter S or U to
indicate the maximum length. In the example dialog box on the preceding page,
the variable Name is a string variable with a maximum length of 15. The variable
BirthDate is a date variable. The entry for the variable Conc does not declare a
data type, so it’s assigned the real data type. These data type rules apply to
variable names entered manually in the dialog box or read directly from the input file.

You can use the Var1 .. Var99 syntax in the variable list to abbreviate a long list of
variables. Specify the data type for the entire list of variables by entering the data
type at the end of the list (e.g., Q1 .. Q15(I)).

The Format Statement box is required when the fields of the import file aren’t
delimited with commas, spaces, or tabs. The format statement is used to indicate
where the data for each variable begins and ends. The format statement is
discussed in more detail below.

Statistix interprets the letter M and a period as a missing value. You can enter an
additional string (e.g., 999 or N/A) in the Alternative Missing Value Indicator
field to flag missing values.

Tab- A tab-separated text file is a text file where the columns of data on each line are
Separated separated by a tab character. One line of text corresponds to one case in Statistix.
Files Quotation marks aren’t required for string data. The Format Statement isn’t used
for this type of file.

Comma and A “Comma and Quote” text file is a particular text format made popular by
Quote Files spreadsheet and database programs. In a comma and quote file, columns of data
are separated by commas or spaces. String data, such as a person’s name, are
enclosed in quotation marks (“” or ‘’). One line of text corresponds to one case in
Statistix. The dialog box on the preceding page shows an example of a comma and
quote file. The Format Statement isn’t used for this type of file.

Format The Format Statement in the Import Text File procedure is used to import columns
Statement of data from a text file when the data are arranged with fixed field widths. We call
files of this type a formatted file to distinguish them from tab-separated files and
comma and quote files. The format statement is used to specify the exact locations
of data for each input variable.

The use of the format statement can be tedious and should be avoided whenever
possible. The reasons for using a format statement include:
• Columns of data aren’t comma, tab, or space delimited.
• String data aren’t enclosed in quotation marks.
• You want to skip unwanted columns of data.
• The data for one case requires more than one line in the input file.

Consider the formatted file below.

Fisher, E 11/02/39IA 228
Haley, J 01/10/53IA 182
Hickman, D 10/23/26IA 249
Johnson, M 01/01/38IA 259
Sellman, G 10/20/20IA 224
Thorn, J 11/16/33IA 189
Baur, J 12/24/73NE 137
Christianson, E 10/23/47NE 173
Farrow, C 10/14/58NE 177
Greer, R 11/28/13NE 241
Keller, G 12/13/40NE 225

The strings in the first column of data aren’t enclosed in quotation marks. Also,
there aren’t any delimiting characters between the column of dates and the two-
letter state abbreviations. A format statement is required to indicate where each
column begins and ends.

The Import Text File dialog box to import this file is displayed below. The format
statement is:
A15 3X A8 A2 F5

Each variable requires a format specification. A single format specification
consists of a letter followed by a number indicating the field width. There are five
format specifications that can be used for variable data. The A format (A for
automatic) can be used for string, date, and numerical variables. There are four
additional format specifications that can be used for integer and real variables: D -
Decimal format, E - Exponential format, F - Fixed format, and I - Integer format.
The differences between these types are more important when used with the Export
and Print procedures discussed later in this chapter. Any integer or real variable
can be imported using the F format.

In all cases, enter a variable’s format specification by typing the letter followed by
a number representing the total field width for the variable. The field width can
include blank spaces before and after the actual field of data.

In the example above, the format specification A15 is used for the first variable
Name, indicating that the first 15 characters of each input line contains the string
value for that variable. The format specification 3X is used in the example to skip
three spaces on the input line between the data for the variables for Name and
BirthDate. The last format specification—F5 for the variable Conc—includes two
characters for the leading spaces and three characters for the three digit numbers.

The F format can be used to insert a decimal point into a column of numbers at a
specific position. Suppose that the column of data for the variable Conc was
entered as ten times the actual value, such that the number for the first case 228
represented the value 22.8. By specifying the format F5.1 instead of F5, a decimal
point would be inserted one position from the right side.

If field widths repeat, we can use a shorthand notation rFw to reduce the size of the
format statement. For example, the format F5.1 F5.1 F5.1 can be abbreviated as
3F5.1.

A repeat factor can be applied to a list of formats inside parentheses too. For
example, the format statement A8 I2 F8.2 A8 I2 F8.2 can be abbreviated using the
statement 2(A8 I2 F8.2).

The X format is used to skip spaces between variables. The general format is rX
where r indicates the number of spaces to skip. For example, we’d use 15X to skip
the first 15 characters of the input line and import only the variables BirthDate,
State, and Conc.

Long records are sometimes split into several lines in an input file. Suppose you
wanted to read 17 variables from a file, but the first ten variables were listed on
one line and the last seven variables were listed on the following line. You would
then need to use the / character in the format statement to show where the line
break occurs:
10F8 / 7F8

Importing a If a file contains the data for a single variable, you can use the “single” option to
Single read all the data items, regardless of how many columns there are in the file. Just
Variable enter the word “single” in the space provided for the input format statement. The
values will be read left to right, top to bottom.

Comment Statistix ignores any lines in a text file that begin with the characters “(”, “$”, or
Lines “#”. This lets you add comment lines to your text files.

Export Excel File
Statistix files can’t be accessed directly by other programs. Use the Export Excel
procedure to create files containing Statistix data that can be used by Microsoft
Excel and compatible programs. You must have Excel installed on your computer
to use this procedure.

The first step in exporting data is to specify the new file name. The standard save
dialog box appears and lists existing Excel files. Enter a new file name in the File
Name box, or select an existing file. Once you’ve specified an export file, the
Export Excel File dialog appears as shown below.

All you need to do now is to select the variables you want to export. Highlight one
or more variables in the Variables list, then press the right-arrow button to move
the highlighted variables to the Export Variables list. Press Ctrl-A to highlight all
the variables. Press the OK button to create the file.

The selected variables will appear in the new file in the order that you’ve placed
them in the Export Variables list. Variable names are listed in the first row of the
new file. Omitted cases aren’t exported.

An alternative method of exporting data to other programs is to use the Windows
clipboard. You can copy Statistix spreadsheet data to the clipboard and then paste
the data into another application.

Export Text File


A text file is a standard file format that is commonly used to transfer data between
different programs. The dialog box, shown below, appears after you specify an
export file name for a text file using the standard save dialog.

First select the variables you want to export. Highlight one or more variables in
the Variables list, then press the right-arrow button to move the highlighted
variables to the Export Variables list. Press Ctrl-A to highlight all the variables.
The selected variables will appear in the new file in the order that you’ve placed
them in the Export Variables list.

Next select a File Format. A Tab Separated file uses the tab character to separate
columns of data. A Comma and Quote text file separates columns of data using
commas, and string data, such as a person’s name, are enclosed in quotation marks.
In a Fixed Format file, each column has a fixed width such that columns of data
line up vertically. String data aren’t enclosed in quotation marks and commas
aren’t used to separate columns. You have the option of specifying a Format
Statement that specifies the field widths and numeric formats for each variable.
The format statement is discussed in detail below.

Check the Export Variable Names check box to have the variable names included
on the first line of the export file. Variable types for variables other than real
variables are included in the list of names.

Check the Export Value Labels check box to have value labels, rather than the
numeric codes, written to the file for numeric variables for which you’ve defined
value labels.

Format A format statement is a list of format specifications used to indicate how you want
Statement the data for each variable to appear. A variable format specification consists of a
letter, a field width, and sometimes a number for decimal places. If you don’t
enter a format statement, Statistix will construct a default format statement using
the Column Format information discussed in Chapter 2. By omitting the format
statement, the data are formatted more or less as they’re displayed in the spreadsheet
page. The default format guarantees at least one space between variables. The
different format specifications are listed in the table below.

              General  Example
Name          Format   Format   Example Appearance   Notes
Automatic     Aw       A10      John Smith
                                01/05/92
                                65
                                12.34567
Decimal       Dw.s     D11.5    12.345               s ≤ w-4
                                3.4523E-03
Exponential   Ew.s     E10.4    1.234E+01            s ≤ w-4
                                3.452E-03
Fixed         Fw.d     F7.2     12.34                d < w-2
                                0.003
Integer       Iw       I2       12
                                0
Space         rX       10X      inserts spaces
New line      /        /        inserts line feed

In the table, w is the field width, s is the number of significant digits, d is the
number of digits to the right of the decimal point, and r is the repeat factor.

The A format can be used for variables of any data type. The D, E, F, and I
formats are used for integer and real variables, each resulting in a different numeric
format.

The A format displays numbers using an integer format when the number is a
whole number, or a decimal format when the number contains a fraction. For
numbers with a fraction, as many digits as possible are displayed, trimming any
trailing zeros. The automatic format works well for data you’ve entered
manually because the numbers generally are displayed just as you’ve entered them.
For computed variables, such as variables created using the Transformations
procedure, the A format often displays nonsignificant digits.

The D format displays numbers using a decimal format where the decimal is free
to move about to maximize the number of digits displayed.

The F format displays numbers in a decimal format with a fixed number of
decimal places.

The E format displays numbers in exponential format, or scientific notation. A
number displayed in exponential format has two parts, the mantissa and the
exponent. The mantissa is a number displayed as a decimal number that’s always
greater than or equal to 1.0 and less than 10.0. The exponent is displayed using the
letter E followed by a signed integer. The number represented in this fashion is the
mantissa multiplied by 10 raised to the exponent. For example, the number
4.23E-02 is equal to 4.23 × 10^-2, or 0.0423. This format is useful for very small
and very large numbers.

The I format displays numbers as a whole number. When a number that includes a
fraction is displayed using this format, the number is rounded off to the nearest
whole number.

The X and / aren’t used as format specifications for variables but are used to insert
spaces and new lines into the output record.

Any of the format specifications can have a number in front called the repeat
factor. This is used to abbreviate the format statement when several variables are
to be formatted in a similar manner. A repeat factor can also be placed in front of
a list of format specifications inside parentheses, as in:
3I5 2(I1 1X F4.2 F6.2) E10.4

When using format specifications, remember to make w large enough to account
for the minus sign for negative numbers and for extra space between variables.
String data are left justified, so you should always use the X format in front of a
format for a string variable to insert a space.
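
As a sketch (reusing the variables from the import example earlier in this
chapter), a format statement such as
A15 1X A10 1X A2 I5
could be used to export Name in 15 characters, BirthDate in 10, State in 2, and
Conc as a whole number in 5, with single spaces inserted between the fields.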

View Text File

This procedure is used to view a text file on the screen. There are a number of text
files you may be interested in viewing without leaving Statistix. You may want to
check the contents of a log file to refresh your memory about transformations
you’ve made. Or you may want to review a Statistix report file you’ve saved after
running an analysis of your data. You may even want to look at a text file you
want to import data from.

Once you’ve selected a file to view from the standard open dialog box, the file is
displayed in a new tab, as shown on the preceding page.

You can scroll forward and backward through the file using the scroll bar and the
page up and page down keys. You can print the file by clicking on the Print icon
on the toolbar.

View Graph File


This procedure is used to view a graph file created by Statistix or other programs.
Supported file formats include emf, wmf, bmp, jpg, png, and gif.

Once you’ve selected a file to view from the standard open dialog box, the file is
displayed in a new tab, as shown below.

The image is displayed in its original size and shape. Use the scroll bars to view
different portions of the image.

View Excel File
This procedure is used to view the contents of an Excel file. Excel must be
installed on your computer to use this procedure. Both xls and xlsx file formats
are supported.

Once you’ve selected a file to view from the standard open dialog box, the file is
displayed in a new tab, as shown below.

Use the scroll bars to view different portions of the file. You can highlight a
portion of the spreadsheet and then click on the copy icon to copy the selected
portion of the file to the Windows clipboard. Once on the clipboard, data may be
copied to the Statistix spreadsheet.

View Statistix File
This procedure is used to view the contents of a Statistix file. It’s different from the
Open procedure because the data can’t be modified and the data isn’t used for
analyses. You can, however, copy selected data from the view page to the
Windows clipboard.

Once you’ve selected a file to view from the standard open dialog box, the file is
displayed in a new tab, as shown on the next page.

Use the scroll bars to view different portions of the file. You can highlight a
portion of the spreadsheet and then click on the copy icon to copy the selected
portion of the file to the Windows clipboard. Once on the clipboard, data may be
copied to the main Statistix spreadsheet.

File Info
File Info is a report that provides basic information about the Statistix file you
currently have open. The report lists variable names, variable data types, column
formats, variable labels, and value labels. The report is first displayed in a page on
the screen, but can also be printed or saved in a file. An example report is shown
below.

File Name C:\SX\10.0\Cholesterol.sx
File Label Cholesterol concentration/age study of women from two states

Variables 5
Selected Cases 30
Omitted Cases 0
Total Cases 30

Variable    Data Type   Format   Variable Label/Value Labels
Age         Integer     A 6      Age in Years
Conc        Integer     A 6      Cholesterol Concentration
State       Integer     A 6
                                 1 Iowa  2 Nebraska
AgeClass    Real        A 9
ConcClass   Real        A 9

The report is largely self-explanatory. The column formats are coded:
A-automatic, D-decimal, E-exponential, F-fixed, and I-integer. The column format
letter is followed by the column width and sometimes the number of decimal
places. See Column Formats in Chapter 2 for more information about column
formats.

Print
The Print procedure is used to print the contents of the active page. In the case of
Statistix reports and plots, the report or plot is printed. When the spreadsheet is the
active page, you print the data for a selection of variables. These procedures are
described below.

Print Report

Click on the Printer drop-down list to change the printer. Select the page
Orientation. Most Statistix reports are designed to fit on the page when using
portrait orientation with the Courier New 11 point font. You can change the values
for the Top Margin and Left Margin. Push the Reset button to reset all entries to
the default values. Push the OK button to print the report.

Print Plot

Click on the Printer drop-down list to select a printer. Select the page
Orientation. You can change the values for the Top Margin and Left Margin.
The bottom and right margins are set to the same values as the top and left
margins. Select a Size. If you select Lock Aspect Ratio, then the printed plot will
have the same relative width and height as it appeared on the screen. If you select
Full Page (only available for landscape), the plot will be stretched to fill the
printed page. Push the Reset button to reset all entries to the default values. Push
the OK button to print the plot.

Print Data

First select the variables you want to print. Highlight one or more variables in the
Variables list, then press the right-arrow button to move the highlighted variables
to the Print Variables list. Press Ctrl-A to highlight all the variables. The selected
variables will be printed in the order that you’ve placed them in the Print Variables
list. You can rearrange the variables by dragging them to a new position using
your mouse.

You have the option of entering a Format Statement that specifies the field widths
and numeric formats for each variable. If you omit the format statement, the data
are formatted more or less as they’re displayed in the spreadsheet page. The format
statement is discussed in detail on the next page.

Check the Print Case Numbers box if you’d like to have case numbers printed on
each line of the report.

If you’ve defined value labels for any of your variables, check the Print Value
Labels box to have the labels printed rather than the numeric codes.

You can specify the font size by entering a number in the Report Font Size box.
Typical values range from 10 to 12, but you can enter a smaller value to squeeze
more data on a page.

When you finish making your selections, you can press the Print button to send the
report directly to the printer. It’s often a better idea to press the Preview button
instead so you can look at the report on the screen to verify that the report is
formatted correctly. While previewing the report, you have the option of printing
the report, or saving it to a file. You’ll also get the opportunity to change the page
orientation and margins before printing.

Format A format statement is a list of format specifications used to indicate how you want
Statement the data for each variable to appear on the report. A variable format specification
consists of a letter, a field width, and sometimes a number for decimal places as
well. If you don’t enter a format statement, Statistix will construct a default format
statement for you using the Column Format information discussed in Chapter 2.
By omitting the format statement, the data are formatted more or less as they’re
displayed in the spreadsheet page. The default format guarantees at least one space
between variables.

The different format specifications are listed in the table on the next page. In the
table, w is the field width, s is the number of significant digits, d is the number of
digits to the right of the decimal point, and r is the repeat factor.

The A format can be used for variables of any data type. The D, E, F, and I
formats are used for integer and real variables, each resulting in a different numeric
format.

The A format displays numbers using an integer format when the number is a
whole number, or a decimal format when the number contains a fraction. For
numbers with a fraction, as many digits as possible are displayed, trimming any
trailing zeros. The automatic format works well for data you’ve entered
manually because the numbers generally are displayed just as you’ve entered them.
For computed variables, such as variables created using the Transformations
procedure, the A format often displays nonsignificant digits.

              General  Example
Name          Format   Format   Example Appearance   Notes
Automatic     Aw       A10      John Smith
                                01/05/92
                                65
                                12.34567
Decimal       Dw.s     D11.5    12.345               s ≤ w-4
                                3.4523E-03
Exponential   Ew.s     E10.4    1.234E+01            s ≤ w-4
                                3.452E-03
Fixed         Fw.d     F7.2     12.34                d < w-2
                                0.003
Integer       Iw       I2       12
                                0
Space         rX       10X      inserts spaces
New line      /        /        inserts line feed

The D format displays numbers using a decimal format where the decimal is free
to move about to maximize the number of digits displayed.

The F format displays numbers in a decimal format with a fixed number of
decimal places.

The E format displays numbers in exponential format, or scientific notation. A
number displayed in exponential format has two parts, the mantissa and the
exponent. The mantissa is a number displayed as a decimal number that’s always
greater than or equal to 1.0 and less than 10.0. The exponent is displayed using the
letter E followed by a signed integer. The number represented in this fashion is the
mantissa multiplied by 10 raised to the exponent. For example, the number
4.23E-02 is equal to 4.23 × 10^-2, or 0.0423. This format is useful for very small
and very large numbers.

The I format displays numbers as a whole number. When a number that includes a
fraction is displayed using this format, the number is rounded off to the nearest
whole number.

The X and / aren’t used as format specifications for variables but are used to insert
spaces and line feeds into the output record.

Any of the format specifications can have a number in front called the repeat
factor. This is used to abbreviate the format statement when several variables are
to be formatted in a similar manner. A repeat factor can also be placed in front of
a list of format specifications inside parentheses, as in:
3I5 2(I1 1X F4.2 F6.2) E10.4

When using format specifications, remember to make w large enough to account
for the minus sign for negative numbers and for extra space between variables.
String data are left justified, so you should always use the X format in front of a
format for a string variable to insert a space.

Exit Statistix
The Exit command is used to exit Statistix. If your open data file has been
modified since you last saved it, Statistix warns you and gives you a chance to save
it before exiting.

C H A P T E R

4
Summary Statistics
These procedures are designed to help you condense, summarize, and display data.
Together with the Summary Plots discussed in the next chapter, you’ll use them
in the preliminary stages of analysis because they allow you to recognize general
patterns and they suggest directions for further analysis. They’re particularly
useful for detecting “unusual” values.

The Descriptive Statistics procedure displays a table of descriptive statistics, such
as mean and standard deviation, for a list of variables. The Frequency
Distribution procedure tabulates frequency tables, including counts and
percentages, for discrete and continuous data. A Cross Tabulation table displays
the frequencies and percentages for each combination of values of two or more
variables. The Breakdown procedure produces a table of descriptive statistics,
such as mean and standard deviation, for one dependent variable broken down in a
nested fashion using the levels of up to five grouping variables. A Stem and Leaf
Plot is a frequency plot similar to a histogram where the digits of the data are used
to construct the bars of the plot. The Percentiles procedure is used to compute a
table of arbitrary percentiles for a list of variables.

These procedures are illustrated with example data from Snedecor and Cochran
(1980, p. 386). The data are the blood serum cholesterol levels and ages of 30
women, 11 from Iowa and 19 from Nebraska. The cholesterol concentrations are
in the variable Conc, and the ages are in Age. The variable State indicates the
state, with Iowa = 1 and Nebraska = 2. Two additional categorical variables were
created using Transformations. The variable AgeClass assigns the ages to ten-year
age classes. For example, if a woman’s age were within the range 50 to 59, the
value of AgeClass would be 50. AgeClass is created using the transformation
AgeClass = 10 * Trunc(Age/10)

ConcClass is created in a similar manner, and assigns each case to a 40 mg/100 ml
cholesterol concentration class. It’s created as follows:
ConcClass = 40 * Trunc(Conc/40)

For example, ConcClass is assigned the value 200 for any case for which the value
of Conc is in the 200 to 239 range.

These example data are listed on the next page. They’re also distributed with the
Statistix software in the data file Sample Data\Cholesterol.sx.

Age AgeClass Conc ConcClass State
46 40 181 160 1
52 50 228 200 1
39 30 182 160 1
65 60 249 240 1
54 50 259 240 1
33 30 201 200 1
49 40 121 120 1
76 70 339 320 1
71 70 224 200 1
41 40 112 80 1
58 50 189 160 1
18 10 137 120 2
44 40 173 160 2
33 30 177 160 2
78 70 241 240 2
51 50 225 200 2
43 40 223 200 2
44 40 190 160 2
58 50 257 240 2
63 60 337 320 2
19 10 189 160 2
42 40 214 200 2
30 30 140 120 2
47 40 196 160 2
58 50 262 240 2
70 70 261 240 2
67 60 356 320 2
31 30 159 120 2
21 20 191 160 2
56 50 197 160 2

Descriptive Statistics
The Descriptive Statistics procedure produces a summary table of descriptive
statistics for a list of variables. You can select the statistics you want tabulated
from the following list: number of non-missing cases, number of missing cases,
sum, mean, standard deviation, variance, standard error of the mean, confidence
interval of the mean, coefficient of variation, median, minimum and maximum,
first and third quartiles, median absolute deviation, biased variance, skew, and
kurtosis.

Specification

Select the variables for which you want to compute descriptive statistics.
Highlight the variables you want to select in the Variables list box, then press the
right arrow button to move them to the Descriptive Variables list box. To
highlight all variables, click on the first variable in the list, and drag the cursor to
the last variable in the list.

The Grouping Variable is optional. If you select one, the summary statistics will
be tabulated separately for each value found in the grouping variable. You can
change the C. I. Percentage Coverage for mean confidence intervals. Select the
statistics you want reported by checking off the Statistics to Report check boxes.

Data Restrictions
The grouping variable can be of any data type. Real values will be truncated to
whole numbers. Strings will be truncated to 20 characters.

Example
The data are described on page 90. In the dialog box above, three of the five
variable names have been moved from the Variables list box to the Descriptive
Variables list box. No grouping variable was specified. The results are displayed
below.

Descriptive Statistics

Variable N Mean SD Minimum Maximum


Age 30 48.567 16.347 18.000 78.000
Conc 30 213.67 59.751 112.00 356.00
State 30 1.6333 0.4901 1.0000 2.0000

If you select more than six or seven statistics to report, the table is presented with
the variable names along the top. For example, the report below lists all of the
statistics available.

Descriptive Statistics

Age Conc State


N 30 30 30
Missing 0 0 0
Sum 1457 6410 49
Lo 95% CI 42.463 191.36 1.4503
Mean 48.567 213.67 1.6333
Up 95% CI 54.671 235.98 1.8164
SD 16.347 59.751 0.4901
Variance 267.22 3570.2 0.2402
SE Mean 2.9845 10.909 0.0895
C.V. 33.659 27.965 30.008
Minimum 18.000 112.00 1.0000
1st Quarti 37.500 180.00 1.0000
Median 48.000 199.00 2.0000
3rd Quarti 59.250 251.00 2.0000
Maximum 78.000 356.00 2.0000
MAD 10.000 27.500 0.0000
Biased Var 258.31 3451.2 0.2322
Skew -0.1009 0.6711 -0.5534
Kurtosis -0.7008 0.2762 -1.6938

The median absolute deviation (MAD) is the median value of the absolute
differences between the individual values and the sample median. See Snedecor
and Cochran (1980, pp. 78-81) for definitions of the other statistics.
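As a quick check of this definition, the MAD reported above for Age can be
reproduced with a few lines of Python (a sketch using only the standard library):

from statistics import median

ages = [46, 52, 39, 65, 54, 33, 49, 76, 71, 41, 58, 18, 44, 33, 78,
        51, 43, 44, 58, 63, 19, 42, 30, 47, 58, 70, 67, 31, 21, 56]

med = median(ages)                          # 48.0, the reported median
mad = median(abs(a - med) for a in ages)    # median absolute deviation
print(med, mad)                             # 48.0 10.0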

Frequency Distribution
The Frequency Distribution procedure produces a frequency tabulation for
discrete or continuous data. It computes the frequency, the relative frequency
(percentage of the total), and the cumulative frequency and cumulative relative
frequency of the data.

Specification
Select the variables for which you want to display frequency tables. Highlight the
variables you want to select in the Variables list box, then press the right arrow
button to move them to the Frequency Variables list box. You can also move a
variable by double-clicking its name. To delete variables from the Frequency
Variables list, highlight the variables you want to delete, then press the left arrow
button.

You can specify Low, High, and Step values to control the number of bins and the
width of each bin. The value you enter for low is the lowest value for the first bin.
The value you enter for high is the highest value of the last bin. The value you
enter for a step is the width of each bin. If you don’t specify these values,
frequencies are reported for each discrete value.

Select a Sort Method: Value, Ascending Frequency, or Descending Frequency.

Data Restrictions
Variables of any data type can be specified. There can be no more than 500 unique
values for each discrete variable and no more than 500 bins if low, high, and step
values are specified.

Example
The data are described on page 90. Frequencies are produced for the variable
AgeClass, which is the ten-year age class for each of the 30 female subjects, and
for State, which indicates the state where the subject resides.

The dialog box above illustrates what variables and options are selected. The
results are shown on the next page.

Frequency Distribution of AgeClass

Cumulative Cumulative
AgeClass Freq Percent Freq Percent
40 8 26.7 8 26.7
50 7 23.3 15 50.0
30 5 16.7 20 66.7
70 4 13.3 24 80.0
60 3 10.0 27 90.0
10 2 6.7 29 96.7
20 1 3.3 30 100.0
Total 30 100.0

Frequency Distribution of State

Cumulative Cumulative
State Freq Percent Freq Percent
Nebraska 19 63.3 19 63.3
Iowa 11 36.7 30 100.0
Total 30 100.0

The frequencies of each discrete value for a continuous variable, such as Conc in
our example data, are not of interest. For a continuous variable, you need to
establish intervals that span the range of the data and then count the number of
times data values fall within the bounds of the intervals. You do this by entering
low, high, and step values in the bottom of the dialog box. For example, you can
enter the values 80, 360, and 40 for the low, high, and step values for Conc. The
results are displayed below.

Frequency Distribution of Conc Cholesterol Concentration

Cumulative Cumulative
Low High Freq Percent Freq Percent
80 120 1 3.3 1 3.3
120 160 4 13.3 5 16.7
160 200 10 33.3 15 50.0
200 240 6 20.0 21 70.0
240 280 6 20.0 27 90.0
280 320 0 0.0 27 90.0
320 360 3 10.0 30 100.0
Total 30 100.0

If a value falls on an interval boundary, the value is counted in the lower of the two
intervals.
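That boundary rule is easy to mimic if you want to reproduce the table elsewhere.
The sketch below bins the Conc values the same way in Python; because a
boundary value belongs to the lower interval, each bin includes its upper edge
(how a value exactly equal to the overall low value is treated isn’t spelled out in
the manual):

concs = [181, 228, 182, 249, 259, 201, 121, 339, 224, 112, 189, 137, 173,
         177, 241, 225, 223, 190, 257, 337, 189, 214, 140, 196, 262, 261,
         356, 159, 191, 197]

low, high, step = 80, 360, 40
for lo in range(low, high, step):
    hi = lo + step
    # count values with lo < x <= hi: a value on a boundary is
    # counted in the lower of the two intervals
    n = sum(lo < x <= hi for x in concs)
    print(f"{lo:4d} {hi:4d} {n:3d}")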

Survey Frequency Distribution
The Survey Frequency Distribution procedure produces a compact frequency
tabulation for a group of discrete variables that share the same response values. It
computes the frequency and/or relative frequency of data. It’s a useful way to
display the results of survey questionnaires or similar data.

Specification

Select the variables you want to include in the frequency table. Highlight the
variables you want to select in the Variables list box, then press the right arrow
button to move them to the Frequency Variables list box. You can also move a
variable by double-clicking its name.

Select the values you want reported from the Report buttons: Counts, Percentages,
or Both counts and percentages.

Data Restrictions
The Frequency Variables can be either numeric variables (real or integer), or string
variables, but not a combination of numeric and string variables. There can be no
more than 20 unique values for all the frequency variables combined. If the first
variable selected has value labels defined, then those labels will be used for
column headings in the report.

Example
The data are from the results of an AP survey used to research Americans’
priorities for the federal government (AP-NORC, 2013). Respondents were asked
to score 20 topics on a scale of 1 to 5, with 1 meaning no effort at all and 5
meaning a great deal of effort. The dialog box above is used to produce the report
shown on the next page.

The report is more concise than the Frequency Distribution report described in the
previous section, and it’s easier to compare results between variables.

Survey Frequency Distribution

Percentages
Variable Label N 1 2 3 4 5
Q01 Advancing gay rights 557 36.8 18.3 24.4 12.2 8.1
Q02 Overturning Roe v Wade 561 40.3 13.4 21.6 11.4 13.4
Q03 Reforming gun laws 559 24.5 21.5 16.3 16.3 21.5
Q04 Addressing climate change 559 21.5 21.5 28.6 13.2 15.4
Q05 Reducing gender disparities 585 17.3 22.4 32.6 18.3 9.2
Q06 Promoting family values 566 21.2 15.2 24.2 16.1 23.1
Q07 Reducing racial disparities 552 14.3 22.5 29.5 19.4 14.3
Q08 Woman's right to choose 555 17.5 15.5 27.7 16.6 22.7
Q09 Reducing wealth gap 580 16.6 13.4 24.8 19.7 25.7
Q10 Protecting right to vote 559 11.3 17.4 23.4 21.5 26.5
Q11 Preparing natural disasters 594 6.1 16.2 41.4 21.2 15.2
Q12 Reforming immigration 580 7.1 15.2 28.3 21.2 28.3
Q13 Quality of education 569 5.1 9.1 22.5 29.5 33.7
Q14 Improving infrastructure 572 3.1 10.5 39.5 26.0 20.8
Q15 Improving health care 578 7.1 7.1 16.3 27.5 41.9
Q16 Reducing federal deficit 584 4.1 7.2 16.3 25.5 46.9
Q17 Reforming the tax system 572 3.1 9.3 27.6 30.6 29.5
Q18 Reducing unemployment 562 6.0 5.0 20.3 29.4 39.3
Q19 Way government functions 579 4.1 6.0 18.3 29.5 41.8
Q20 Protecting Social Security 587 1.0 5.1 23.7 27.8 42.2

Cross Tabulation
The Cross Tabulation procedure forms a cross tabulation table (also called a
contingency table) for up to five classifying variables. The number of classifying
variables determines the dimension of the table. There’s a table cell for each
unique combination of values of the classifying variables. A cross tabulation table
displays the number of cases that fall into each of the cross-classified table cells.
In statistical terms, such a table represents the joint frequency distribution of the
classifying variables.

This procedure doesn’t perform any statistical tests. There are other procedures in
Statistix that perform tests for contingency tables. See Chi-Square Test,
McNemar’s Symmetry Test, Two By Two Tables, Log-Linear Models, and
Mantel-Haenszel Test.

Specification

To perform a cross tabulation, you simply select the classifying variables from the
Variables list and press the right-arrow button to move them to the Cross Tab
Variables list. The last variable becomes the column classifier and the
second-to-last variable becomes the row classifier. If more than two variables are
selected,
the earlier variables become “control” variables. A separate table is produced for
each unique combination of control variable values. These tables are produced in
dictionary order; the levels of the rightmost control variables increment most
rapidly.

A cross tabulation table always contains the counts for each cell. If you want to
have column and row percentages reported for each cell as well, check the Report
Column Percent and Report Row Percent check boxes.
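For comparison, the same two-way table of counts can be produced outside
Statistix; here is a sketch using pandas (the variable names follow the example
data described on page 90):

import pandas as pd

age_class = [40, 50, 30, 60, 50, 30, 40, 70, 70, 40, 50, 10, 40, 30, 70,
             50, 40, 40, 50, 60, 10, 40, 30, 40, 50, 70, 60, 30, 20, 50]
conc_class = [160, 200, 160, 240, 240, 200, 120, 320, 200, 80, 160, 120,
              160, 160, 240, 200, 200, 160, 240, 320, 160, 200, 120, 160,
              240, 240, 320, 120, 160, 160]

# rows are AgeClass, columns are ConcClass, with marginal totals
table = pd.crosstab(pd.Series(age_class, name="AgeClass"),
                    pd.Series(conc_class, name="ConcClass"),
                    margins=True, margins_name="Total")
print(table)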

Data Restrictions
You can have up to five classifying variables. Each classifying variable can have
up to 500 levels. Classifying variables can have any data type (real, integer, date,
and string). Real values will be truncated to whole numbers. Strings are truncated
to 20 characters.

Example
The original data are from Snedecor and Cochran (1980, p. 386), described on
page 90. AgeClass indicates the age class a person was in (for example, AgeClass
= 60 means the person was in the 60 to 69 year age class). ConcClass indicates the
cholesterol concentration class a person was in (for example, ConcClass = 160
means the person’s cholesterol level was in the 160 to 199 mg/100 ml range).

The dialog box on the preceding page illustrates how to request cross tabulation of
AgeClass by ConcClass. The results are displayed below.

Cross Tabulation of AgeClass by ConcClass

ConcClass
AgeClass 80 120 160 200 240 320 Total

10 0 1 1 0 0 0 2

20 0 0 1 0 0 0 1

30 0 2 2 1 0 0 5

40 1 1 4 2 0 0 8

50 0 0 2 2 3 0 7

60 0 0 0 0 1 2 3

70 0 0 0 1 2 1 4

Total 1 4 10 6 6 3 30

Cases Included 30 Missing Cases 0

Note the diagonal pattern of nonzero cells in the table above. This suggests a
relationship between age and cholesterol level.

Breakdown
Breakdown computes summary statistics for a variable broken into groups and
subgroups in a nested fashion. The dependent variable can be classified by up to
five categorical variables. The summary statistics are displayed for all levels of
nesting. You can select the statistics to be reported from the following list:
number of cases, sum, mean, standard deviation, variance, standard error of the
mean, minimum and maximum.

Specification
Select the variable you want to compute the summary statistics for and move it to
the Dependent Variable box. Then select up to five Categorical Variables that
will be used to “break down” the data into groups. The order in which the
categorical variables are selected determines the order of nesting, with the value of
the last variable changing most rapidly.

Check the Statistics to Report boxes for the statistics you want to include in the
report. Select a report format. An example of the List Format is displayed on the
next page. The Crosstabs Format presents the same information in two-
dimensional tables. Check Display Grid to display a grid for the crosstabs format.
Check Wide Report to present the table in a wide format suitable for printing in
landscape mode.

Data Restrictions
There can be up to five categorical variables. The categorical variables can be of
any data type. Real values will be truncated to whole numbers. Strings will be
truncated to 20 characters.

Example
The original data are from Snedecor and Cochran (1980, p. 386), described on
page 90. The dependent variable Conc is the cholesterol concentration of 30
female subjects. The two categorical variables used are State and AgeClass. For
example, suppose you’re interested in the cholesterol concentration means by
states as well as the age-specific means within states. The variables selected in
the dialog box on the preceding page specify that AgeClass be nested within State.
The results are presented below.

Breakdown for Conc Cholesterol Concentration

Variable Level N Sum Mean SD


AgeClass 30 2 383 191.50 13.435
AgeClass 40 3 414 138.00 37.510
AgeClass 50 3 676 225.33 35.076
AgeClass 60 1 249 249.00 M
AgeClass 70 2 563 281.50 81.317
State Iowa 11 2285 207.73 63.795
AgeClass 10 2 326 163.00 36.770
AgeClass 20 1 191 191.00 M
AgeClass 30 3 476 158.67 18.502
AgeClass 40 5 996 199.20 19.791
AgeClass 50 4 941 235.25 30.314
AgeClass 60 2 693 346.50 13.435
AgeClass 70 2 502 251.00 14.142
State Nebraska 19 4125 217.11 58.796
Overall 30 6410 213.67 59.751

Cases Included 30 Missing Cases 0

The indentations of the first two columns (the variable names and their values)
depict the nesting structure. Any variable X indented with respect to another
variable Z means the statistics for the levels of X are nested within the levels of Z.
For example, AgeClass is nested within State. The order of nesting is consistent
with the order in which the classifying variables are listed in the Categorical
Variables list box.

Note that the outer levels of nesting summarize the inner levels. In the example
above, the line labeled “Iowa” summarizes the data for the five age classes listed
above it.

Stem and Leaf Plot
The Stem and Leaf Plot is a simple but handy way to organize numerical data.
The digits of the individual values are ordered in a table of “stems” and “leaves”
that resembles a histogram when turned sideways. Unlike a histogram, each
original measurement can be read from the plot.

Specification

Select the variables you want to use to produce the stem and leaf plots from the
Variables list box. If you select a Grouping Variable, a separate plot is produced
for each value of the grouping variable.

Extreme values can affect the scale of the stem and leaf plot. If you see extreme
values causing a scaling problem, check the Trim Outliers check box. The
extreme values won’t be omitted completely but will be listed individually outside
the scale.

You can enter Low, High, and Step values to control the scale of the plot. It’s
useful when you want to compare two different plots using the same scale. Since
stem boundaries must fall on whole digits, the step value must be 1, 2, or 5
multiplied by a power of ten (e.g., 0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50, 100, 200, etc.).

Data Restrictions
The plot variables must contain numbers. The grouping variable, if used, can be of
any data type. Real values will be truncated to whole numbers. Strings will be
truncated to 20 characters.

Example
The data are described on page 90. The dialog box above specifies a plot for Age,
which is the age of 30 subjects. The results are given on the next page.

This plot contains all the information of a histogram. In addition, it preserves
information about the fine structure of the data. Each number in your data is
divided into two parts, the stem and the leaf. The stem indicates the values of the
most significant digits of an observation, while the leaf gives the least significant
digit. Each digit in the Leaves column is a separate leaf, so there is one leaf for
each case.

Stem and Leaf Plot of Age Age in Years

Leaf Digit Unit = 1          Minimum 18.000
1 8 represents 18            Median  48.000
                             Maximum 78.000
Depth Stem Leaves
2 1 89
3 2 1
3 2
7 3 0133
8 3 9
13 4 12344
(3) 4 679
14 5 124
11 5 6888
7 6 3
6 6 57
4 7 01
2 7 68

30 cases included 0 missing cases

For example, consider the first row of the plot, “1 89”. The stem value is 1, and
the leaves are 8 and 9, so you know the digits for the first subject are 1 and 8 and
the digits for the second subject are 1 and 9. You don’t know yet where to put the
decimal point. That is, the numbers could be 1.8, 1.9, or perhaps 18, 19, or even
0.018, 0.019, etc. The message above the body of the plot, “1 8 represents 18”,
tells you that a stem value of 1 and a leaf value of 8 represents the number 18. So
the first two values in our example are 18 and 19.

The first column in a stem and leaf plot is a cumulative frequency column that
starts at both ends of the data and meets in the middle. The row that contains the
median of the data is marked with parentheses around the count of observations for
that row. For rows above the median, the number in the first column is the number
of items in that row plus the number of items in all the rows above. Rows below
the median are just the opposite. If the number of cases is even and the two middle
values fall in different rows, there is no “median row”.

Further details of how to interpret stem and leaf plots can be found in Velleman
and Hoaglin (1981).

Percentiles
The Percentiles procedure computes the percentiles you specify for a list of
variables. A percentile is a value such that a specified percent of the data falls at or
below that value. The median is the 50th percentile. The lower and upper
quartiles are the 25th and 75th percentiles.
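The percentiles in the example report below are consistent with the common
(n + 1)p interpolation rule, sketched here in Python. This is an illustration of that
rule; it reproduces the report for these data, but we haven’t verified that it is
exactly the rule Statistix applies in every case.

def percentile(data, p):
    """Interpolated percentile at 1-based position (n + 1) * p / 100."""
    xs = sorted(data)
    n = len(xs)
    pos = (n + 1) * p / 100.0
    lo = int(pos)                  # rank at or below the position
    if lo < 1:
        return xs[0]
    if lo >= n:
        return xs[-1]
    frac = pos - lo
    return xs[lo - 1] + frac * (xs[lo] - xs[lo - 1])

ages = [46, 52, 39, 65, 54, 33, 49, 76, 71, 41, 58, 18, 44, 33, 78,
        51, 43, 44, 58, 63, 19, 42, 30, 47, 58, 70, 67, 31, 21, 56]
print([percentile(ages, p) for p in (10, 25, 50, 75, 90)])
# [21.9, 37.5, 48.0, 59.25, 70.9] -- matches the Percentiles report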

Specification

Select the variables for which you want to compute percentiles from the Variables
list and move them to the Percentiles Variables list. Enter one or more Percentile
values you want computed in the space provided.

Example
The data are described on page 90. The 10th, 25th, 50th (the median), 75th, and
90th percentiles are computed for the variables Age and Conc, the age and
cholesterol level for a sample of female subjects. The analysis is specified in the
dialog box on the preceding page. The results are given below.

Percentiles

Variable Cases 10.0 25.0 50.0 75.0 90.0


Age 30 21.900 37.500 48.000 59.250 70.900
Conc 30 137.30 180.00 199.00 251.00 329.50

C H A P T E R

5
Summary Plots
These procedures are designed to help you condense, summarize, and visually
display data. You’ll use them in the preliminary stages of analysis because they
allow you to recognize general patterns and they suggest directions for further
analysis. They’re particularly useful for detecting “unusual” values.

The utility of these procedures isn’t restricted to the preliminary stages of analysis,
however. They’re important tools for evaluating the results of a variety of
analyses. For example, after fitting models to your data, you can use these
procedures to inspect the resulting residuals.

Two procedures graphically depict the frequency distribution of values for a
variable. A Histogram is a bar-chart used to graphically represent the frequencies
of discrete data and the frequency density of continuous data. A Pie Chart
graphically represents frequencies using the slices of a pie.

There are three graphical procedures used to compare the means for several
samples. A Dot Plot uses dots arranged on a series of horizontal lines to represent
the means of a list of variables or groups of a variable. The Error Bar Chart uses
vertical bars to represent the mean and vertical lines to represent the spread
(standard deviation, standard error, or confidence interval) for a list of variables or
groups of a variable. The Pie Chart uses slices of a pie to represent means.

There are three graphical procedures that are useful for comparing the data
distributions of different samples. A Box and Whisker Plot graphically depicts
the median, interquartile range, and range for a list of variables or groups of a
variable. A Quantile Plot is an ordered scatter plot of the values of a sample
plotted against the quantiles of the values. A series of quantile plots can be
displayed for several variables on one plot in juxtaposed panels. A Quantile-
Quantile Plot, or Q-Q Plot, is a scatter plot where the quantiles of two samples are
plotted against one another.

The Scatter Plot is used to plot a two dimensional scatter diagram. Up to six pairs
of X and Y variables can be plotted on the same plot.

The Scatter Plot Matrix produces a matrix of juxtaposed scatter plots making it
easy to evaluate the relationships among several variables on a single plot.

A conditioning plot, or Coplot, is a graphical method for examining how a
response variable depends on two or more factor variables.

Histogram
The Histogram procedure produces a bar graph frequency distribution for discrete
or continuous variables. A histogram can summarize large amounts of data in a
single visual image.

Specification

First you select the method of specifying the analysis by selecting one of the Model
Specification radio buttons. If you want to plot a single histogram for a single
variable, which is usually the case, either method can be used. Select either
method and then move the name of the variable you want to plot from the
Variables list to either the Dependent Variable or Table Variables box.

You can also use this procedure to plot several juxtaposed histograms using a
single scale. The model specification method you choose depends on how you’ve
organized the data you want to plot. Select the Categorical method if you want to
plot the data of a single variable (the Dependent Variable) using a second
classifying variable that identifies groups (the Categorical Variable). This will
produce a series of histograms, one for each level of the classifying variable.

Select the Table method if you want to plot juxtaposed histograms using the data
of several variables. Move the names of the variables you want to plot from the
Variables list to the Table Variables list.

When using the table method for more than one table variable, indicate how you
want the multiple variables handled by selecting one of the Multiple Table
Variables buttons. Select Juxtaposed Histograms to have a separate histogram
plotted for each variable. Select Combine Table Variables to have the data for all
the selected variables combined to plot a single histogram.

Juxtaposed histograms are displayed in a matrix of panels with one or more
columns and one or more rows. Enter a value in the Juxtaposed Columns box to
control the number of columns and rows in the matrix.

Select the Scale: Count or Percent. Check the Display Normal Curve check box
to superimpose a normal curve over the bars of the histogram.

You can enter Low, High, and Step values to control the X and Y axis scales. You
use this feature to create a meaningful interval width and interval boundaries. You
can also use it to limit the range of data for the specified variable in order to
eliminate outliers or to concentrate the plot on a particular range of values.

Histogram bars are drawn using fill color 1. The normal curve is drawn using line
color 2. See Plot Color settings discussed in Chapter 1.

Data Restrictions
Variables can be of any data type. Strings will be truncated to 20 characters. The
maximum number of bars (specified using low, high, step for the X axis) is 10,000.
The maximum number of juxtaposed histograms is 36, displayed in a matrix with
at most six rows and six columns.

Example
The data are described on page 90. The dialog box on the preceding page plots a
histogram with a normal curve for the variable Conc. The results are displayed
below.

The example of juxtaposed histograms displayed below was specified using Conc
as the dependent variable and State as the categorical variable.

Click on the Settings icon to change the titles, fonts, or colors.

Cumulative Distribution Plot
This procedure produces a plot of an empirical cumulative distribution function
F(x), where F(x) is defined as the proportion of X values less than or equal to x.
This plot, like the Histogram, is useful for examining the distribution of a sample
of data.
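The function being plotted is simple to state in code. The sketch below computes
the empirical F(x) at each data point for a handful of the Conc values (plain
Python; with tied values, the largest cumulative proportion at a value is the one
that matters):

def ecdf(data):
    """Empirical CDF: (x, proportion of values <= x) at each data point."""
    xs = sorted(data)
    n = len(xs)
    return [(x, (i + 1) / n) for i, x in enumerate(xs)]

for x, f in ecdf([181, 228, 182, 249, 259]):   # a few Conc values
    print(f"F({x}) = {f:.2f}")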

Specification
An example dialog box appears on the next page. Select the method of specifying
the analysis by selecting one of the Model Specification radio buttons. If you want
to plot a single variable, either method can be used. Select either method and then
move the name of the variable you want to plot from the Variables list to either the
Dependent Variable or Table Variables box.

You can also use this procedure to plot the cumulative distribution functions of
several samples at once. The model specification method you choose depends on
how you’ve organized the data you want to plot. Select the Categorical method if
you want to plot the data of a single variable (the Dependent Variable) using a
second classifying variable that identifies groups (the Categorical Variable). This
will produce a series of plots, one for each level of the classifying variable.

Select the Table method if you want to plot cumulative distribution functions using
the data of several variables. Move the names of the variables you want to plot
from the Variables list to the Table Variables list.

When plotting multiple samples, indicate how you want the multiple samples
handled by selecting one of the Multiple Samples buttons. Select Juxtaposed Plots
to have a function plotted for each sample in separate panels. Select
Superimposed Plots to plot a function for each sample in a single panel. Select
Combine All Samples to have the data for all the selected samples combined to
plot a single cumulative distribution function.

Juxtaposed plots are displayed in a matrix of panels with one or more columns and
one or more rows. Enter a value in the Juxtaposed Columns box to control the
number of columns and rows in the matrix.

Data Restrictions
The dependent and table variables must be real, integer, or date variables. The
categorical variable can be any data type. Strings will be truncated to 20
characters. The maximum number of juxtaposed samples is 36, displayed in a
matrix with at most six rows and six columns. The maximum number of
superimposed samples is six. The maximum number of intervals (specified using
low, high, step for the X axis) is 10,000.

Example
The data are described on page 90. The dialog box above plots superimposed
cumulative distribution functions for cholesterol concentration by state: Iowa and
Nebraska. The results are displayed on the next page.

Click on the Settings icon to change the titles, fonts, or colors.

Dot Plot
Dot plots graphically represent the means or medians of several samples. They
usually convey quantitative information better than bar charts or pie charts. You
can also use this procedure to plot one-dimensional scatter plots.

Specification

First you select the method of specifying the analysis by selecting one of the Model
Specification radio buttons. The method you choose depends on how you’ve
organized the data you want to plot.

Select the Categorical method if you want to plot the data of a single variable (the
Dependent Variable) using one or two classifying variables that identify groups
(the Categorical Variables). One horizontal line appears on the plot representing
each value of the first categorical variable selected. If you select a second
categorical variable, a series of superimposed or juxtaposed dot plots will be
displayed, one for each value of the second categorical variable (a two-way dot
plot).

If you want to plot the data of several variables, one horizontal line for each
variable, select the Table method. Move the names of the variables you want to
plot from the Variables list to the Table Variables list.

Select one of the Dot Values buttons: Mean, Median, or All Values. If you select
All Values, a one-dimensional scatter plot with symbols plotted for each value is
plotted for each sample. You may want to display dot plots for data that are
already summarized (exactly one value for each sample), in which case you should
select Mean.

Select one of the Sort Method buttons to control the order that the samples appear.
The Standard method sorts numeric data (including numeric variables with value
labels) numerically from bottom to top, and string data alphabetically from top to
bottom. The By Value method sorts the samples based on the means from smallest
to largest. Sorting by value increases the perception of the distribution of values
and is recommended when the samples don’t have a natural order.

When using the Categorical method of model specification with two categorical
variables selected, you have a Two-way Classification dot plot. Select
Superimpose to display a two-way dot plot in a single panel using different
sequence plot symbols to represent the different values of the second categorical
variable. Select Juxtapose to display juxtaposed dot plots, one panel for each
value of the second categorical variable.

Juxtaposed dot plots are displayed in a matrix of panels with one or more columns
and one or more rows. Enter a value in the Juxtaposed Columns box to control the
number of columns and rows in the matrix. You can enter low, high, and step
values to control the scales of the X Axis.

Data Restrictions
When using the table method, you can select up to 40 variables. Categorical
variables can be of any data type. Strings will be truncated to 20 characters. When
using the categorical method, the first categorical variable can have up to 40 levels.
The second categorical variable, if any, can have up to six levels for superimposed
plots and up to 36 levels for juxtaposed plots.

Examples
The data are described on page 90. The dialog box above is used to create a dot
plot of Conc by AgeClass and State. The results are displayed below.

Each symbol in the example dot plot represents the mean cholesterol concentration
for the corresponding age class. The circles represent the means for Iowa patients,
the diamonds for Nebraska patients.

The dot plot below is an example of a two-way dot plot with juxtaposed plots. It
was specified by selecting Conc for the dependent variable and AgeClass and State
for the categorical variables.

The dot plot below is an example of using the All Values option to display a series
of one-dimensional scatter plots. One dot is plotted for each value of cholesterol
concentration, rather than one dot for the mean.

Click on the Settings icon to change the titles, fonts, symbols, or colors.

Error Bar Chart
The Error Bar Chart graphically displays the mean and spread (confidence
interval, standard deviation, or standard error) for a list of variables, or for one
variable broken into groups by one or two categorical variables. The means are
represented using vertical bars or circles, and the standard deviations are
represented using a vertical line centered on the mean.

Specification

First you select the method of specifying the analysis by selecting one of two
Model Specifications. Select the Categorical method if you want to plot the data
of a single variable (the Dependent Variable) divided into groups by one or two
classifying variables (the Categorical Variables). Using one categorical variable
produces a series of bars, one for each level of the categorical variable. If you
enter two categorical variables, the first is used to define the X axis and the second
is used to further subdivide the data into sub-bars (see the example chart below).

Select the Table method to plot the data of several variables side by side.

Next select the Chart Type, either bar chart or line chart. The bar chart uses
vertical bars to represent the means. The line chart uses circles to mark the means,
and the circles are connected sequentially with lines. The line chart is sometimes
used when the grouping variable is ordered in a meaningful way, such as months of
the year.

You have six choices for Error Bar Type: 50% & 95% confidence intervals, 95%
confidence intervals, 99% confidence intervals, standard deviation, standard error
of the mean, and no error bars.

Data Restrictions
You can select up to 40 variables when using the table method. When using the
categorical method, the first categorical variable can have up to 40 levels and the
second categorical variable, if any, can have up to six levels.

Example
The data are described on page 90. Conc is the cholesterol concentration,
AgeClass indicates the age class, and State indicates in which of two states the
subject lives. The seven observations associated with age classes 10, 20, and 70
were omitted for this analysis using the Omit/Select Cases procedure. The analysis
is specified on the preceding page. The resulting chart is shown below.

The values for the error bars (C.I., SD, etc.) are computed independently for each
sample. Use the Means Plot option available on the results menus for the analysis
of variance procedures for error bars computed using a pooled estimate of error
(Chapter 9).
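To make the “computed independently” point concrete, here is a sketch of a 95%
confidence interval for one sample, with no pooling across the other bars (Python;
scipy is assumed only for the t quantile):

from math import sqrt
from statistics import mean, stdev
from scipy.stats import t

def ci95(sample):
    """95% C.I. for the mean of a single sample, using only that sample."""
    n = len(sample)
    se = stdev(sample) / sqrt(n)
    h = t.ppf(0.975, n - 1) * se
    return mean(sample) - h, mean(sample) + h

iowa_conc = [181, 228, 182, 249, 259, 201, 121, 339, 224, 112, 189]
print(ci95(iowa_conc))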

Click on the Settings icon to change the titles, fonts, symbols, or colors. Bar charts
use your fill color selections for series 1 and series 2 to draw the bars. Line plots
use your selections for plot symbols for series 1 and series 2.

Pie Chart
This procedure plots a pie chart. Pie charts are often used to graphically display a
frequency distribution, but you can also apply the chart to sums or means.

Specification
First select a Categorical Variable. The pie chart has one pie slice for each value
found for this variable.

If you want to base the size of pie slices on sums or means, then select the variable
containing the data to use to compute the sums or means and move it to the
Summary Variable box. Don’t select a summary variable if you want pie slices to
represent counts.

Pie slices are labeled using the values found in the categorical variable. These
labels can also include the summarized values (counts, sums, or means), or the
percent of the total. Make your choice by selecting one of the Display Values
radio buttons.

You can enter a value for the Max Number of Slices to limit the number of pie
slices. If the categorical variable has more levels than the maximum number you
indicate, the extra levels are grouped into one pie slice labeled “Other”.

To emphasize a slice, you can explode it (pull it away from the rest of the circle).
Enter the value for the categorical variable corresponding to the slice you want to
emphasize in the Explode Value box.

The pie slices are normally ordered clockwise by the levels of the categorical
variable. Check the Sort by Size check box to have the slices ordered by the size
of the slices instead. Check the 3D check box for a three-dimensional version.

Example
The data are described on page 90. The dialog box above graphs a pie chart for the
variable AgeClass. The results are displayed below.

The example pie chart above is a simple frequency distribution based on counts.
Pie charts are also useful for showing how sums are proportioned. Suppose you
have a data set of transactions for a business that include the dollar amount of the
transaction and the expense category for each transaction. By selecting the
variable for expense category as your Categorical Variable, the variable for the
dollar amount as your Summary Variable, and selecting Sum for the Summary
Function, you’ll create a pie chart that shows how expenses are distributed among
categories.

If the data you want to chart is already tabulated, create a data set with one case for
each pie slice and select Sum for the Summary Function.

Click on the Settings icon to change the titles, fonts, or colors.

Box and Whisker Plot
The Box and Whisker Plot procedure computes box plots that graphically present
measurements of central tendency and variability. A series of box plots can be
displayed side by side, which can dramatically illustrate differences between
groups.

Specification
First you choose the method of specifying the analysis by selecting one of the
Model Specification radio buttons. The method you choose depends on how
you’ve organized the data you want to plot.

Select the Categorical method if you want to plot the data of a single variable (the
Dependent Variable) using a second classifying variable that identifies groups (the
Categorical Variable). This produces a series of box plots, one for each level of
the classifying variable.

Select the Table method if you want to plot the data of several variables side by
side. Move the names of the variables you want to plot from the Variables list to
the Table Variables list.

Enter low, high, and step values for the Y-Axis to control the scale of the Y axis.
Select an Orientation for the boxes: vertical or horizontal.

Data Restrictions
You may select up to 40 table variables. The categorical variable can be of any
type. Real values for the categorical variable will be truncated to whole numbers.
Strings will be truncated to 20 characters.

Example
The data are described on page 90. The dialog box on the preceding page specifies
box plots for Conc (cholesterol concentration) grouped by AgeClass. The resulting
plot is displayed on the next page.

These box plots powerfully illustrate that cholesterol concentration increases with
age. Each box plot is composed of a box and two whiskers. The box encloses the
middle half of the data, bounded by the lower and upper quartiles. The box is
bisected by a line at the value for the median. The vertical lines at the top and
bottom of the box are called the whiskers, and they indicate the range of “typical”
data values. Whiskers always end at the value of an actual data point and can’t be
longer than 1½ times the size of the box.

Extreme values are displayed as stars for possible outliers and circles for probable
outliers. Possible outliers are values that are outside the box boundaries by more
than 1½ times the size of the box. Probable outliers are values that are outside the
box boundaries by more than 3 times the size of the box.
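The fences behind those rules are straightforward to compute. The sketch below
derives the box, whiskers, and both sets of fences for one sample (Python; the
quartile rule shown is the common medians-of-halves rule, which may differ
slightly from the quartile rule Statistix uses):

from statistics import median

def box_stats(sample):
    """Quartiles, whisker ends, and the 1.5x and 3x outlier fences."""
    xs = sorted(sample)
    n = len(xs)
    q1 = median(xs[: n // 2])            # lower quartile (lower half)
    q3 = median(xs[(n + 1) // 2:])       # upper quartile (upper half)
    box = q3 - q1
    inner = (q1 - 1.5 * box, q3 + 1.5 * box)   # beyond: possible outliers
    outer = (q1 - 3.0 * box, q3 + 3.0 * box)   # beyond: probable outliers
    # whiskers end at the most extreme data values inside the inner fences
    whiskers = (min(x for x in xs if x >= inner[0]),
                max(x for x in xs if x <= inner[1]))
    return q1, q3, whiskers, inner, outer

print(box_stats([181, 228, 182, 249, 259, 201, 121, 339, 224, 112, 189]))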

Click on the Settings icon to change the titles, fonts, or colors.

Computational Notes
See Velleman and Hoaglin (1981) or Cleveland (1994) for details.

Quantile Plot
The Quantile Plot is useful for graphically comparing the distributions of two or
more samples. The name of the plot derives from the feature that individual
quantiles can be read from the plot. This makes it more powerful than comparing
histograms or one-dimensional scatter plots.

Specification
Select one or more Plot Variables. A panel containing a quantile plot will be
plotted for each plot variable selected.

Alternatively, you can select one plot variable plus a Grouping Variable. One
panel containing a quantile plot will be displayed for each value of the grouping
variable.

Quantile plots are displayed in a matrix of panels with one or more columns and
one or more rows. Enter a value in the Juxtaposed Columns box to control the
number of columns and rows in the matrix.

You can enter low, high, and step values to control the scale of the Y Axis to give
it a more appealing appearance.

Data Restrictions
You may specify up to 36 plot variables, or one plot variable plus a grouping
variable with up to 36 levels. The grouping variable can be of any data type.
Strings will be truncated to 20 characters.

Example
The data are described on page 90. The dialog box above is used to specify a
quantile plot comparing the distributions of cholesterol concentration of women
from two states. The results are displayed on the next page.

An f quantile of a distribution is a number, q, such that approximately a fraction f
of the values of the distribution is less than or equal to q; f is the f-value of q. The
x-axis is labeled Fraction, which shows the f-values. The y-axis shows the values
for q.
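A common convention (used by Cleveland, 1994) assigns the i-th smallest of n
values the f-value (i − 0.5)/n; the manual doesn’t state Statistix’s exact rule, so
treat this sketch as an illustration of the idea:

def quantile_points(sample):
    """(f, q) pairs for a quantile plot with f_i = (i - 0.5) / n."""
    xs = sorted(sample)
    n = len(xs)
    return [((i + 0.5) / n, x) for i, x in enumerate(xs)]

for f, q in quantile_points([181, 228, 182, 249, 259]):
    print(f"f = {f:.2f}  q = {q}")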

The distributions of the samples from the two states appear similar. Specific
quantiles can be read from the plots and compared. For example, the medians
(f-value equal to 0.5) are about 200 for both groups. Similarly, the first quartiles
are about 180 for both groups.

Click on the Settings icon to change the titles, fonts, symbols, or colors.

Computational Notes
See Cleveland (1994) for details.

Q-Q Plot
The Q-Q Plot procedure is used to compare the distribution of two samples by
plotting corresponding quantiles of each sample against one another on an X-Y
plot. A Tukey mean-difference plot can be plotted alongside the Q-Q plot to
further identify differences.

Specification
First you select the method of specifying the analysis by selecting one of the Model
Specification radio buttons. The method you choose depends on how you’ve
organized the data you want to plot. Select the Categorical method if you want to
plot the data of a single variable (the Dependent Variable) using a classifying
variable that identifies the two groups (the Categorical Variable).

Select the Table method if you want to plot the data of two variables; one will be
plotted on the x-axis and one on the y-axis. Move the names of the variables you
want to plot from the Variables list to the Table Variables list.

Check the Tukey mean-difference plot box to have a mean-difference plot
displayed in a panel to the right of the Q-Q plot. You can enter low, high, and step
values to control the scales of the X & Y Axis. Both axes use the same scale.

Example
The data are described on page 90. The dialog box above is used to specify a Q-Q
plot comparing blood cholesterol level of female subjects from two states. The
results are displayed below.

The left panel contains the Q-Q plot. Except for the leftmost two points, the points
are more-or-less scattered evenly around the diagonal reference line indicating that
the distributions of the two samples are similar. The right panel contains the
mean-difference plot. There’s a point plotted corresponding to each pair of
quantiles indicating the difference: Nebraska − Iowa.
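When the two samples have different sizes, as here (11 Iowa and 19 Nebraska
values), a standard approach is to interpolate the larger sample’s quantiles at the
f-values of the smaller sample, then plot the quantile pairs and their differences.
The numpy sketch below follows that approach; the (i − 0.5)/n f-values and the
linear interpolation are assumptions, since the manual doesn’t spell out
Statistix’s exact method.

import numpy as np

iowa = [181, 228, 182, 249, 259, 201, 121, 339, 224, 112, 189]
nebraska = [137, 173, 177, 241, 225, 223, 190, 257, 337, 189, 214, 140,
            196, 262, 261, 356, 159, 191, 197]

def fvals(n):
    return (np.arange(1, n + 1) - 0.5) / n

qx = np.sort(iowa)                                 # x-axis quantiles
qy = np.interp(fvals(len(iowa)),                   # Nebraska quantiles
               fvals(len(nebraska)), np.sort(nebraska))

means = (qx + qy) / 2      # x-axis of the Tukey mean-difference panel
diffs = qy - qx            # Nebraska - Iowa, as in the example
print(np.column_stack([means, diffs]))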

The three Q-Q plots of made-up data below illustrate patterns of differences
between distributions.

In this Q-Q plot, the medians are
similar, but the values for sample B
are more spread out. The high values
of sample B are higher than the high
values of sample A, and the low
values of sample B are higher than the
low values of sample A.

In this Q-Q plot, the values of sample
A are about 10 units greater than
those of sample B throughout the
range of the distribution.

In this Q-Q plot, the values of sample
A are greater than those of sample B,
but not by a constant amount
throughout the range of the
distribution. The values of sample A
are about 20% greater than those of
sample B.

Computational Notes
See Cleveland (1994) for details.

Scatter Plot
The Scatter Plot procedure is used to produce a bivariate scatter diagram. Pairs of
numbers are plotted as points on an X-Y plot.

When investigating possible relationships between variables, plotting the data
should be one of your first steps. Visual inspection of the data is invaluable and
often reveals features of the data that would be overlooked if you proceeded
directly with your statistical analyses.

Specification

Select an X Axis Variable and a Y Axis Variable. The arrow buttons to the left of
each list are used to select and deselect variables for the respective lists. You can
select additional variable name pairs. All pairs are displayed on the same panel
using different symbols.

You can select a Group Variable to have the points plotted using a different scatter
plot symbol for each value of the group variable.

You can enter low, high, and step values to control the scales of either the X Axis
or the Y Axis. If you enter values for either axis, only points that fall between
these values will be plotted. This option is useful for eliminating outliers from the
plot or zooming in on a particular portion of the plot.

Select an option from the Fitted Curve drop-down list to superimpose a line or
curve on the scatter plot. The options are: None, Connect Points, Linear
Regression, and Loess. Selecting Connect Points draws straight lines between
successive points in a fashion similar to a time series plot (Chapter 11). Selecting
Linear Regression draws a straight line through the points fitted using linear
regression (Chapter 8). Selecting Loess draws a smoothed curve through the
points using locally weighted regression (Chapter 8). The loess curve requires two
additional parameters. Loess Alpha is a smoothing constant normally between
0.50 and 1.0. The higher the value, the smoother the curve. Loess Degree is either
Linear, suitable for nearly linear relationships, or Quadratic, suitable for nonlinear
relationships. The Loess procedure described in Chapter 8 offers additional
options.
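Outside Statistix, a similar smooth can be produced with statsmodels, whose
lowess function exposes a frac argument playing the role of Loess Alpha. Note
the correspondence is loose: statsmodels’ lowess fits locally linear models only
(the equivalent of Loess Degree = Linear), so this is a sketch of the idea rather
than a reproduction of Statistix’s curve.

import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

age = np.array([46, 52, 39, 65, 54, 33, 49, 76, 71, 41, 58, 18, 44, 33, 78,
                51, 43, 44, 58, 63, 19, 42, 30, 47, 58, 70, 67, 31, 21, 56])
conc = np.array([181, 228, 182, 249, 259, 201, 121, 339, 224, 112, 189, 137,
                 173, 177, 241, 225, 223, 190, 257, 337, 189, 214, 140, 196,
                 262, 261, 356, 159, 191, 197])

# frac is the smoothing constant: higher values give a smoother curve
smoothed = lowess(conc, age, frac=0.75)   # rows of (age, fitted conc)
print(smoothed[:5])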

Example
The data are described on page 90. The dialog box on the preceding page is used
to specify a scatter plot for Conc vs. Age, the blood cholesterol level and age of 30
female subjects. The results are displayed below.

The fitted linear regression line in the plot above makes it easier to see the linear
relationship between age and cholesterol. A fitted line or curve can also be useful
as a reference line to spot nonlinear relationships.

Click on the Settings icon to change the titles, fonts, symbols, or colors.

Scatter Plot Matrix
The Scatter Plot Matrix procedure is used to study multidimensional data. It
produces a matrix of bivariate scatter diagrams using shared scales.

Specification

Select up to ten Column Variables and ten Row Variables. If you want to use the
same variables for columns and rows, you can leave the Row Variables box empty.

Select an option from the Fitted Curve drop-down list to superimpose a line or
curve on each scatter plot. The options are: None, Linear Regression, and Loess.
Selecting Linear Regression draws a straight line through the points fitted using
linear regression (Chapter 8). Selecting Loess draws a smoothed curve through the
points using locally weighted regression (Chapter 8). The loess curve requires two
additional parameters. Loess Alpha is a smoothing constant normally between
0.50 and 1.0. The higher the value, the smoother the curve. Loess Degree is either
Linear, suitable for nearly linear relationships, or Quadratic, suitable for nonlinear
relationships.

Example
The example data are from an environmental study (Bruntz et al., 1974) where the
effects of radiation, temperature, and wind speed on the air pollutant ozone were
studied. You can view the data by opening the file Sample Data\Air.sx. The dialog
box above is used to specify the scatter plot matrix displayed on the next page.

The important feature of a scatter plot matrix is that you can scan a row, or a
column, and see one variable graphed against all others on a single scale. For
example, by looking at the bottom row, you can see that ozone levels are low for
low levels of radiation and both low and high for high levels of radiation (ozone
formation requires a minimum amount of radiation). Ozone levels are highest
on days with high temperatures and low wind speeds (stagnant days).

The addition of the loess smoothed curves makes it easier to see the relationship
between pairs of variables.

Click on the Settings icon to change the titles, fonts, symbols, or colors.

Coplot
A Coplot, also called a conditioning plot, graphically depicts conditional
dependence. A matrix of bivariate scatter plots (called dependency panels) is
displayed, conditioned on one or two factors. It is a powerful method for studying
how a response variable depends on two or more factors.

The Loess, Polynomial Regression, and Nonlinear Regression procedures have
coplot procedures on their respective results menus for displaying coplots of fitted
surfaces. See Chapter 8.

Specification
First select the Dependent Variable. Its values will be plotted along the y-axis of
each scatter plot. Next select one of your factor variables to be used for the X Axis
Variable. Its values will be plotted along the x-axis of each scatter plot.

Select one of the remaining factor variables to use as the Horizontal Given
Variable. Given variables can be either categorical variables, or continuous
variables. Specify the number of Horizontal Intervals. If the given variable
contains categorical data, enter the number of discrete values. For continuous data,
the range of values will be divided into the specified number of intervals with
roughly the same number of observations in each interval. When selecting a value
for the number of intervals, select a number small enough that there’ll be a
sufficient number of points plotted in each dependency panel to show a pattern,
and a number large enough to provide sufficient resolution.

Enter a value for the Horizontal Overlap. Adjacent intervals will share
approximately the specified proportion of points. Overlapping intervals provide
more points for each dependency panel, and smooths the transition from one panel
to the next.
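One widely used way to build such intervals is Cleveland’s equal-count
algorithm, in which every interval holds roughly the same number of points and
adjacent intervals share roughly the requested fraction. The sketch below
implements that scheme (as R’s co.intervals does); the manual doesn’t say
whether Statistix uses this exact algorithm, so treat it as an illustration.

def equal_count(values, number, overlap):
    """Equal-count given intervals: `number` intervals, each holding about
    the same count of points, adjacent intervals sharing about `overlap`
    of their points."""
    xs = sorted(values)
    n = len(xs)
    r = n / (number * (1 - overlap) + overlap)   # points per interval
    out = []
    for i in range(number):
        lo = int(round(i * (1 - overlap) * r))   # first rank, 0-based
        hi = min(n, int(round(i * (1 - overlap) * r + r)))
        out.append((xs[lo], xs[hi - 1]))         # interval endpoints
    return out

# four intervals over 100 points, adjacent intervals sharing about half
print(equal_count(range(100), number=4, overlap=0.5))
# [(0, 39), (20, 59), (40, 79), (60, 99)]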

If you have more than two factor variables, select one to use as the Vertical Given
Variable. Specify the number of Vertical Intervals and enter a value for the
Vertical Overlap in the same manner as you did for the horizontal given variable.

You may enter low, high, and step values to control the scales of the X-Axis and
Y-Axis. Be careful not to enter a range that will exclude any data.

Select an option from the Fitted Curve drop-down list to superimpose a line or
curve on each scatter plot. The options are: None, Linear Regression, and Loess.
Selecting Linear Regression draws a straight line through the points fitted using
linear regression (Chapter 8). Selecting Loess draws a smoothed curve through the
points using locally weighted regression (Chapter 8). The loess curve requires two
additional parameters. Loess Alpha is a smoothing constant normally between
0.50 and 1.0. The higher the value, the smoother the curve. Loess Degree is either
Linear, suitable for nearly linear relationships, or Quadratic, suitable for nonlinear
relationships.

Data Restrictions
When only a horizontal given variable is specified, the maximum number of
horizontal intervals is 16. When both horizontal and vertical given variables are
used, the maximum number of intervals is six each.

Example I
The data for the first example are from an experiment where the combined amount
of nitric oxide plus nitrogen dioxide produced from an engine using ethanol for
fuel was measured for varying levels of predictor variables E, the equivalence ratio
(a measure of the richness of air and fuel mixture), and C, the compression ratio
(Cleveland et al., 1992). You can view the data by opening the file Sample
Data\Ethanol.sx. The dialog box on the preceding page is used to specify the
coplot displayed below.

The narrow panel at the top of the plot is called the given panel. The rectangles in
the panel show the range of values in each interval of the given variable. The
panels below the given panel are called the dependency panels, one panel for each
interval of the given variable. The panels are ordered from left to right and bottom
to top. The panel corresponding to the first interval is located at the bottom left,
and the panel corresponding to the last interval is located at the top right.

On each dependency panel, NOx is graphed against C for those observations where
the values of E lie in the corresponding given interval. The example coplot reveals
an interaction between C and E. For low values of E, NOx increases with C, and
for high values of E, NOx is a constant function of C.

A complete evaluation of these data would include a second coplot reversing the
roles of the variables C and E (not shown here).

Example II
The data for this example are from an environmental study (Bruntz et al., 1974)
where the effects of radiation, temperature, and wind speed on the air pollutant
ozone were studied. You can view the data by opening the file Sample Data\Air.sx.
The coplot on the next page graphs the dependent variable Ozone against the factor
variable Radiation, conditioned on the two remaining factor variables Temp
(temperature) and Wind (wind speed).

In this example, there are two given panels, one for temperature and one for wind
speed. Each given panel shows the ranges of three overlapping intervals. The
dependency panels plot ozone against radiation for each combination of the
intervals of the given variables. The left column of panels corresponds to the
lowest interval for temperature. The bottom row of panels corresponds to the
lowest interval for wind speed. Note how the effect of radiation is greatest in the
lower right panel corresponding to high temperatures and low wind speeds.

Click on the Settings icon to change the titles, fonts, symbols, or colors.

Computational Notes
See Cleveland (1994) for computational details.

C H A P T E R

6
One, Two, & Multi-Sample Tests
Statistix offers a number of procedures to test hypotheses about the central values
of the population distributions from which the samples are drawn. These
procedures are often referred to as tests of location. Several of these tests are
parametric and require the assumption that the data are normally distributed.
Nonparametric tests are provided for situations where the assumption of normality
isn’t appropriate. When their assumptions are appropriate, parametric tests are
generally more powerful than their nonparametric equivalents, although
nonparametric tests often compare quite well in performance. The parametric
versions test hypotheses concerning the group means. The nonparametric
procedures test central value hypotheses based on measures other than the mean.

The One-Sample T Test is used to test hypotheses about sample means.

The Paired T Test is a parametric test used to test for differences between means
of two groups when the samples are made in pairs.

The Sign Test and Wilcoxon Signed Rank Test are nonparametric alternatives to
the Paired T Test.

The Two-Sample T Test is a parametric test that tests for a difference in the
means of two groups when the samples are drawn independently from two
normally distributed populations.

The Wilcoxon Rank Sum Test and Median Test are nonparametric alternatives
to the Two-sample T Test.

The One-Way AOV is a multi-sample test that tests for differences among the
means of several groups.

The Kruskal-Wallis One-Way AOV is a nonparametric alternative to the
One-Way AOV.

The Friedman Two-Way AOV is a nonparametric alternative to the two-way
analysis of variance. The General AOV/AOCV procedure, which is discussed in
Chapter 9, performs parametric tests with two or more classifying attributes.

Background on the parametric tests can be found in Snedecor and Cochran (1980).
Hollander and Wolfe (1973), Lehmann (1975), and Siegel and Castellan (1988) are
good references for the nonparametric procedures.

One-Sample T Test
The One-Sample T Test is used to test whether the mean of a sample drawn from
a normal population differs from a hypothesized value.

Specification

Select the variable from the Variables list that contains the sample values.
Double-clicking a variable will move it to the Sample Variable box. Enter a value
for the null hypothesis. Select the two-sided alternative hypothesis “µ ≠ µ0”, or a one-sided alternative “µ < µ0” or “µ > µ0”.

Example Ten frozen dinners labeled “200 calories” were randomly selected from a day’s
production at a factory. The caloric content of the dinners was measured at 198,
203, 223, 196, 202, 189, 208, 215, 218, 207. The dialog box above illustrates how
to test the hypothesis that the average number of calories in a dinner is 200. The
results are displayed below.

One-Sample T Test

Null Hypothesis: µ = 200
Alternative Hyp: µ ≠ 200
Lower Upper
Variable Mean SE T DF P 95% C.I. 95% C.I.
Calories 205.90 3.3282 1.77 9 0.1100 198.37 213.43

Cases Included 10 Missing Cases 0

The report includes the mean, standard error, t-test, p-value, and confidence
interval. Since the p-value of 0.1100 is larger than the typical value of 0.05 for the
rejection level, the null hypothesis isn’t rejected in this example.
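If you want to verify figures like these outside Statistix, the test is easy to reproduce in a few lines of code. The sketch below uses Python with the third-party scipy library (an assumption of the sketch; Statistix itself requires no programming):

# One-sample t test on the frozen-dinner data (H0: mu = 200)
from scipy import stats

calories = [198, 203, 223, 196, 202, 189, 208, 215, 218, 207]
t, p = stats.ttest_1samp(calories, popmean=200)
print(round(t, 2), round(p, 4))   # 1.77 0.1100, matching the report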



Paired T Test
The Paired T Test, a parametric procedure, is useful for testing whether the
means of two groups are different, where the samples were drawn in pairs. The
test is actually testing whether the mean of the differences of the pairs is different
from zero, or from some other hypothesized value.

Specification

Select the two variables that contain the paired samples. Highlight the variables
you want to select in the Variables list, then press the right-arrow button to move
them to the Sample Variables list box. The typical Null Hypothesis (µD) is that
the difference is zero, but you can enter a different value. You can also select an
Alternative Hypothesis: “Difference ≠ µD”, “Difference < µD”, or “Difference > µD”.

Example The data for this example (Snedecor and Cochran, 1980, p. 87) concern the
number of lesions produced on a tobacco leaf by the application of two different
viral preparations. The halves of a leaf constitute a pair. The data for the first
preparation are in variable Virus1, and that for the second preparation are in Virus2
(see Sample Data\Tobacco.sx).

Case Virus1 Virus2
1 31 18
2 20 17
3 18 14
4 17 11
5 9 10
6 8 7
7 10 5
8 7 6

The analysis is specified above. The results are presented below.



Paired T Test for Virus1 - Virus2

Null Hypothesis: difference = 0
Alternative Hyp: difference ≠ 0

Mean 4.0000
Std Error 1.5236
Mean - H0 4.0000
Lower 95% CI 0.3972
Upper 95% CI 7.6028
T 2.63
DF 7
P 0.0341

Cases Included 8 Missing Cases 0

The null hypothesis is that the mean of the differences is zero. If the assumption of
normality is appropriate, the small p-value (0.0341) suggests that the mean of the
differences is not zero, i.e., the two different viral preparations do cause lesions at
different rates. (The Shapiro-Wilk Test and Normal Probability Plot can be used
to examine the assumption of normality.)

It’s not appropriate to use this test if the data aren’t paired. Let’s call the unit from which the two members of the pair were drawn a block. For example, the blocks may be individuals and the two members of the data pair are reaction times before and after ingestion of some test medication. The advantage of a paired test is that it removes variation in the data due to blocks; the data used for the test are the pair differences within the blocks. The “noise” due to the fact that some individuals have naturally faster or slower reaction times, regardless of the medication, would thus be eliminated. (The paired t test is a special case of a randomized block design analysis of variance.) This analysis isn’t very efficient if the pair members aren’t correlated within blocks; in that case a Two-Sample T Test should be considered instead. Snedecor and Cochran (1980, p. 99-102) give further detail on paired versus independent sampling.
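As a cross-check of the report above, here is a minimal sketch of the same paired t test in Python, assuming the third-party scipy library is available:

# Paired t test on the tobacco-leaf data (H0: mean difference = 0)
from scipy import stats

virus1 = [31, 20, 18, 17, 9, 8, 10, 7]
virus2 = [18, 17, 14, 11, 10, 7, 5, 6]
t, p = stats.ttest_rel(virus1, virus2)
print(round(t, 2), round(p, 4))   # 2.63 0.0341, matching the report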

Sign Test
The Sign Test is a nonparametric alternative to the Paired T Test. It requires
virtually no assumptions about the paired samples other than that they are random
and independent. On the negative side, it’s not as powerful as the Paired T Test
or Wilcoxon Signed Rank Test. However, it’s especially useful for situations
where quantitative measures are difficult to obtain but where a member of the pair
can be judged “greater than” or “less than” the other member of the pair.

As with other paired tests, it assumes that you have two groups and that you have drawn your samples in pairs. The only information in the data that the sign test uses is whether, within a pair, the item from the first group was greater than (“+”) or less than (“−”) the item in the second group. If there is no consistent difference between the groups, there should be an equal number of “+”s and “−”s in the data except for random variation.

Specification

Select the two variables that contain the paired samples. Highlight the variables in
the Variables list, then press the right-arrow button to move them to the Sample
Variables list box. You can also move a variable by double-clicking on the
variable name.

Example The data for this example (Snedecor and Cochran, 1980, p. 87) concern the
number of lesions produced on a tobacco leaf by the application of two different
viral preparations. The halves of a leaf constitute a pair. The data for the first
preparation are in variable Virus1, and that for the second preparation are in Virus2
(see Sample Data\Tobacco.sx).

Case Virus1 Virus2
1 31 18
2 20 17
3 18 14
4 17 11
5 9 10
6 8 7
7 10 5
8 7 6
The analysis is specified above. The results are displayed on the next page.

The null hypothesis tested by the sign test is that the median of the differences is zero. The calculated probability is the binomial probability of observing as few or fewer occurrences of the less abundant sign, given that an individual difference is equally likely to be of either sign.



Sign Test for Virus2 - Virus1

Number of Negative Differences 7
Number of Positive Differences 1
Number of Zero Differences Dropped 0

Exact two-tailed p-value 0.0703

A value is counted as a zero if its
absolute value is less than 0.00001

Cases Included 8 Missing Cases 0

For the virus example, the calculated probability is the probability of observing
one or fewer negative differences, or one or fewer positive differences, in a random
sample of eight. This is a two-tailed probability; halving it produces the correct
one-tailed value. The two-tailed p-value for the example is 0.0703, somewhat
larger than the p-value observed with the Paired T Test.

Computational Notes: The probability is calculated using the same routine as in the binomial function in Probability Functions. The parameter P is set to 0.5.
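Because the sign test reduces to a binomial calculation, it is easy to verify by hand or in code. A minimal sketch in Python (assuming scipy is available):

# Sign test: 1 positive difference out of 8 nonzero pairs (Virus2 - Virus1)
from scipy import stats

p = stats.binomtest(1, n=8, p=0.5, alternative='two-sided').pvalue
print(round(p, 4))   # 0.0703, matching the report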

Wilcoxon Signed Rank Test


The Wilcoxon Signed Rank Test is a nonparametric alternative to the Paired T
Test. It’s generally more powerful than the Sign Test. As with other paired tests,
it assumes that you have two groups and that you have drawn your sample in pairs.
Each pair contains an item from the first group and an item from the second group.
This procedure tests the hypothesis that the frequency distributions for the two
groups are identical. Exact p-values are computed for small sample sizes.

Specification



Select the two variables containing the paired samples. Highlight the variables in
the Variables list, then press the right-arrow button to move them to the Sample
Variables list box.

Example The data for this example (Snedecor and Cochran, 1980, p. 87) concern the
number of lesions produced on a tobacco leaf by the application of two different
viral preparations. The halves of a leaf constitute a pair. The data for the first
preparation are in variable Virus1, and that for the second preparation are in
Virus2. See Paired T Test on page 131 for the data listing, or open the data file
Sample Data\Tobacco.sx.

The differences are first ranked by absolute value. Tied values are given a mean
rank (Hollander and Wolfe 1973). Differences are considered to be tied if they are
within 0.00001 of one another. The ranks are given the same signs that the
original differences had. The negative and positive signed ranks are then summed
separately. The test statistic is W, the sum of the negative and positive signed
ranks.

The analysis is specified on the preceding page. The results are displayed below.

Wilcoxon Signed Rank Test for Virus1 - Virus2

Sum of Negative Ranks -2
Sum of Positive Ranks 34
Sum of Signed Ranks, W 32

Exact two-tailed p-value 0.0234

Normal approximation, Z 2.21
Normal approx, two-tailed P 0.0274

Total number of values that were tied 3
Number of zero differences dropped 0
Number of missing cases 0
Number of unusable cases 8
Max. diff. allowed between ties 0.00001

Suppose the frequency distributions for groups one and two were the same. The
frequency distribution of the differences of the pairs would then be symmetrical
and have a median of zero. In this instance, the absolute values of the sums of
negative and positive signed ranks would be expected to be “similar”. The signed
rank test tests the null hypothesis that the median of the differences equals zero.

The exact p-value for the Wilcoxon signed rank test is computed for small to moderate sample sizes (26 or fewer cases). When ties are found to be present, the
“exact probability” is no longer exact but will usually be a good approximation.
When sample sizes are moderate to large, the normal approximation statistic gives
reliable results. The p-value for the normal approximation is two-tailed. The
normal approximation includes a correction for continuity; its use is described in
Snedecor and Cochran (1980, p. 142).



In the example, the exact p-value is 0.0234. This is fairly close to the p-value of 0.0274 using the normal approximation. As with the t test, these results suggest that the preparations do produce lesions at different rates. While the paired t test is a more powerful test than the signed rank test, the difference in power is often not great. The signed rank test is a popular alternative because it requires far less restrictive assumptions about the data.

The exact p-value routine is based on the p-value routine for the Wilcoxon Rank
Sum Test. It exploits the fact that the null distribution of the signed rank statistic
can be factored as a product of a binomial distribution and the null distribution of
the rank sum statistic (Bickel and Doksum 1977).
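For an independent check, scipy’s wilcoxon function computes the same test. Note that with tied differences, as in this data set, scipy falls back to a normal approximation, so its p-value tracks the approximate value rather than the exact 0.0234. This sketch assumes scipy is available:

# Wilcoxon signed rank test on the paired tobacco-leaf data
from scipy import stats

virus1 = [31, 20, 18, 17, 9, 8, 10, 7]
virus2 = [18, 17, 14, 11, 10, 7, 5, 6]
res = stats.wilcoxon(virus1, virus2)
print(res.statistic, round(res.pvalue, 4))   # p near the normal-approximation value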

Two-Sample T Test
This procedure computes two-sample t tests, which test for differences between the
means of two independent samples. It’s applicable in situations where samples are
drawn independently from two normally distributed groups. Two t tests are
computed; one assumes equal group variances, and the other assumes different
group variances. A test for equality of variances is also performed.

Specification

The analysis can be specified in one of two ways, depending on how the data are
stored. If the data from both groups are entered into a single variable, and a second
categorical variable is used to identify the two groups, use the Categorical method,
as illustrated below. Move the variable containing the observed data into the
Dependent Variable box. Move the variable that identifies the two groups into the Categorical Variable box. To move a variable, highlight the variable in the Variables box, then press the right-arrow button next to the box you want to move it to. When using the categorical method, you can specify more than one dependent
variable, in which case a separate report is displayed for each variable.

If the two groups are entered into Statistix as two variables, one for each group, use
the Table method (see page 141 for an example). Select the two variables and
move them to the Table Variables box.

Typically the null hypothesis is that the means are equal, or that the difference is
zero. You can enter a different value for the Null Hypothesis (µD) of the
difference. You can also select the Alternative Hypothesis: the two-sided
alternative “Difference ≠ µD”, or a one-sided alternative “Difference < µD” or “Difference > µD”.

Confidence intervals for the mean difference are included in the report. Enter a value in the C.I. Percent Coverage box to specify the width of the confidence intervals.

Data Restrictions: You may select up to 500 dependent variables. A separate report will be produced for each variable. The grouping variable used with the categorical method can be of any data type (i.e., real, integer, date, or string). Real values are truncated to whole numbers. Strings are truncated to 20 characters.

Example The data for this example come from Snedecor and Cochran (1980). The goal is to
compare the results of a standard, but slow, chemical analysis procedure with a
quicker, but potentially less precise, procedure. The variable Conc is used to store
the chemical concentrations determined by both methods. The variable Method
identifies the method used (1 = standard, 2 = quick) to determine the
concentration.

Case Conc Method
1 25 1
2 24 1
3 25 1
4 26 1
5 23 2
6 18 2
7 22 2
8 28 2
9 17 2
10 25 2
11 19 2
12 16 2

These data are available in the file Sample Data\Concentrations.sx. The analysis is
specified on the preceding page. The results are displayed on the next page.

Summary statistics for the two groups are given first, including the group means, sample sizes, standard deviations, and standard errors of the means. The values for
SD and SE for the difference are computed using the pooled estimate of variance.

Two-Sample T Tests for Conc by Method

Method N Mean SD SE
Standard 4 25.000 0.8165 0.4082
Quick 8 21.000 4.2088 1.4880
Difference 4.0000 3.5496 2.1737

T-Tests for Mean Difference

Null Hypothesis: difference = 0
Alternative Hyp: difference ≠ 0
Lower Upper
Method Variances DF T P 95% C.I. 95% C.I.
Pooled Equal 10 1.84 0.0956 -0.8433 8.8433
Satterthwaite Unequal 8.0 2.59 0.0320 0.4408 7.5592

Homogeneity of Variances DF F P
Folded F Test (two-sided) 7,3 26.57 0.0212

Cases Included 12 Missing Cases 0

The t-test statistics are given next using two methods. The pooled method t test is
testing the null hypothesis that means for the two groups are equal given that the
two groups have the same variances. The Satterthwaite t test tests the same null
hypothesis except that it doesn’t require the assumption that the variances of the
two groups are equal. A discussion of such tests is given in Snedecor and Cochran
(1980, p. 96-98). Note that the degrees of freedom for unequal variances are
expressed as a decimal number. It’s computed using Satterthwaite’s
approximation, described in Snedecor and Cochran. 95% confidence intervals of
the mean difference are included for both equal and unequal group variances. An
F test for the equality of the group variances is given after the t tests.

Snedecor and Cochran use the above example to illustrate how unequal variances can influence the analysis. Evidence for a difference between the two chemical analyses is considerably weaker when equal variances are assumed (p=0.0956) than when unequal variances are assumed (p=0.0320). When in doubt, it’s safer to assume the variances are unequal. In our example, the Folded F test for equality of variances lends strong support for assuming the variances are unequal (p=0.0212).

These t tests, as well as the F-test for equality of variances, are based on the assumption that the data are normally distributed. The Shapiro-Wilk Test and Normal
Probability Plot are useful for examining this assumption. You should consider
the Median Test or the Wilcoxon Rank Sum Test if non-normality is a problem.
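Both t tests and the folded F test can be reproduced outside Statistix. A sketch in Python (scipy and numpy assumed):

# Two-sample t tests on the chemical-concentration data
import numpy as np
from scipy import stats

standard = [25, 24, 25, 26]
quick = [23, 18, 22, 28, 17, 25, 19, 16]

t_pool, p_pool = stats.ttest_ind(standard, quick, equal_var=True)    # pooled
t_sat, p_sat = stats.ttest_ind(standard, quick, equal_var=False)     # Satterthwaite
print(round(p_pool, 4), round(p_sat, 4))                             # 0.0956 0.0320

# Folded F test: larger variance in the numerator, two-sided p-value
f = np.var(quick, ddof=1) / np.var(standard, ddof=1)
p_f = 2 * stats.f.sf(f, len(quick) - 1, len(standard) - 1)
print(round(f, 2), round(p_f, 4))                                    # 26.57 0.0212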

Wilcoxon Rank Sum Test


Statistix computes the Wilcoxon Rank Sum Test, a nonparametric procedure that tests for differences in the central values of two independent samples. This test can be performed with either of two statistics—the Wilcoxon rank sum statistic or the Mann-Whitney U statistic. Statistix computes both statistics. The two statistics are mathematically equivalent and always lead to identical results. Exact p-values are given for small sample sizes. This test is often almost as powerful as the Two-Sample T Test, and is usually more powerful than the Median Test.

Specification

The analysis can be specified in one of two ways, depending on how the data are
stored. If the two groups are entered into Statistix as two variables, use the Table
method and move the two variables to the Table Variables list box.

If the data from both groups are entered into a single variable and a second
categorical variable is used to identify the two groups, use the Categorical method
as illustrated in the dialog box below. Move the variable containing the observed
data into the Dependent Variable box, and the variable that identifies the two
groups into the Categorical Variable box.

Example The data for this example come from Snedecor and Cochran (1980). The variable
Conc is used to store the chemical concentrations determined by two methods.
The variable Method is used to identify the method used (1 = standard, 2 = quick)
to determine the concentration. See page 137 for a listing of the data.

The analysis is specified above. The results are displayed on the next page.

All the data are combined and converted to ranks. Tied scores are assigned mean
ranks (Hollander and Wolfe 1973). Values are considered to be tied if they are
within 0.00001 of one another. The ranks for each group are then summed to get
the rank sum statistic for each group. If the distributions for the two groups are the
same, the average ranks should be “similar” for each group. The null hypothesis
being tested by the rank sum test is that the distributions for the two groups are the same. Rejecting this hypothesis usually leads to the conclusion that the central values for the two groups differ, although strictly you can only conclude that the distributions for the two groups differ in some way (Bradley 1968).

Wilcoxon Rank Sum Test for Conc by Method

Method Rank Sum N U Stat Mean Rank
Standard 36.0 4 26.000 9.0
Quick 42.0 8 6.0000 5.3
Total 78.0 12

Exact two-tailed p-value 0.1071

Normal approximation, Z 1.62
Normal approx, two-tailed P 0.1042

Total number of values that were tied 3
Maximum difference allowed between ties 0.00001

Cases Included 12 Missing Cases 0

The Mann-Whitney U statistic corresponding to the rank sum is also given. When
sample sizes are small to moderate, exact p-values are calculated and displayed.
(Exact p-values are computed for total sample sizes of 26 or smaller.) For
moderate to large samples, the traditional normal approximation statistic and
associated two-tailed p-value is displayed.

The exact p-value, when reported, should always be used in preference to the
normal approximation. The exact p-value of 0.1071 doesn’t suggest that the two
chemical techniques are different.

Computational Notes: The algorithm for the exact p-value is given in Manly (1991). The corrections for continuity and ties for the normal approximation are given in Siegel and Castellan (1988).
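A quick cross-check of the normal-approximation result is possible with scipy’s mannwhitneyu function (an assumption; with the ties in these data scipy uses the corrected normal approximation rather than the exact calculation):

# Wilcoxon rank sum / Mann-Whitney U test on the concentration data
from scipy import stats

standard = [25, 24, 25, 26]
quick = [23, 18, 22, 28, 17, 25, 19, 16]
res = stats.mannwhitneyu(standard, quick, alternative='two-sided')
print(res.statistic, round(res.pvalue, 4))   # U = 26; p near 0.104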

Median Test
The Median Test is a nonparametric two-sample test. It tests the hypothesis that
the medians for the two groups from which the samples were drawn are equal.

Specification The analysis can be specified in one of two ways, depending on how the data are
stored. If the data from both groups are entered into a single variable and a second
categorical variable is used to identify the two groups, use the Categorical method.
Move the variable containing the observed data into the Dependent Variable box,
and the variable that identifies the two groups into the Categorical Variable box.



If the two groups are entered into Statistix as two variables, select the Table
method as illustrated in the dialog box above. Move the two variables to the Table
Variables list box.

Data Restrictions: Fisher’s Exact Test is reported for sample sizes of 5000 or less. The chi-square value is reported for sample sizes of ten or more.

Example The data for this example come from Snedecor and Cochran (1980). The goal is to
compare the results of a standard, but slow, chemical analysis procedure with a
quicker, but potentially less precise, procedure. The variable Standard stores the
chemical concentrations determined using the standard method, and the variable
Quick stores the concentrations determined using the quicker method.

Case Standard Quick
1 25 23
2 24 18
3 25 22
4 26 28
5 M 17
6 M 25
7 M 19
8 M 16

The analysis is specified above. The results are displayed on the next page.

The first step is to find the median for all of the data, which for our example was 23.5. The number of values above and below the median in each sample is tallied, and the two by two table displayed above is created. The number of ties with the median is also displayed, but this information isn’t used in the calculations. A value is considered to be tied with the median if it differs by no more than 0.00001.

Median Test for Conc by Method

Standard Quick Total
Above Median 4 2 6
Below Median 0 6 6
Total 4 8 12
Ties with Median 0 0 0

Median 23.500
Fisher's Exact 0.0606
Chi-Square 6.00
DF 1
P-value 0.0143

Max. diff. allowed between a tie 0.00001

Cases Included 12 Missing Cases 0


If the medians for the two groups are equal, we would expect “similar” numbers of values within each group to fall above and below the median. The null hypothesis being tested is that the medians for the two groups are equal. The test amounts to a typical chi-square test for independence or heterogeneity, performed on the two by two table.

The chi-square value in the example is 6.00, which results in a p-value of 0.0143. This supports the idea that the chemical analysis procedures are different. Fisher’s Exact Test is computed for small samples, and is the preferred test when reported. Its value of 0.0606 gives less support than the chi-square test to the conclusion that the procedures are different. See Two by Two Tables for more information about Fisher’s Exact Test.
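The whole procedure is simple enough to reproduce by hand. A sketch in Python (scipy and numpy assumed) that builds the two by two table and applies both tests:

# Median test done by hand on the concentration data
import numpy as np
from scipy import stats

standard = [25, 24, 25, 26]
quick = [23, 18, 22, 28, 17, 25, 19, 16]
med = np.median(standard + quick)                    # grand median, 23.5
table = [[sum(x > med for x in standard), sum(x > med for x in quick)],
         [sum(x < med for x in standard), sum(x < med for x in quick)]]
chi2, p_chi2, _, _ = stats.chi2_contingency(table, correction=False)
_, p_fisher = stats.fisher_exact(table)
print(table, round(p_chi2, 4), round(p_fisher, 4))   # 0.0143 and 0.0606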

One-Way AOV
This procedure performs a one-way analysis of variance. The One-Way AOV provides statistics for both the fixed effects (Type I) model and the random effects (Type II) model. It also tests for equality of variances between levels, and there are options to perform multiple comparisons of means and contrasts, display residual plots, and save fitted values and residuals. The one-way AOV is equivalent to the Completely Randomized Design discussed in Chapter 9.

Specification To use the One-Way AOV procedure, you can organize your data in one of two
ways. In the Table method, you create one variable for each of the treatments, then
enter the responses observed for each treatment in its own variable. Your second
option is to create a single dependent variable and enter all of the responses
observed for all of the treatments. Then create a second variable with categorical
values (e.g., 1, 2, 3 ...) that represent the treatments. This is called the Categorical
method. Both of these methods are illustrated below.



Data Restrictions: Sample sizes within treatment levels can be unequal. The maximum number of treatment levels is 500. The treatment variable used with the categorical method can be of any data type. Real values are truncated to whole numbers. Strings are truncated to 20 characters.

Example The example below is from Snedecor and Cochran (1980, p. 216). The grams of
fat absorbed by batches of doughnuts were measured using four types of fat. The
fat absorbed is the response; the fat types are the treatments. To illustrate the
Table method of model specification first, suppose we entered the responses using
four variables, Fat1, Fat2, Fat3, and Fat4, each representing one of the four
treatments (see data file Sample Data\Doughnuts.sx).

Case Fat1 Fat2 Fat3 Fat4
1 64 78 75 55
2 72 91 93 66
3 68 97 78 49
4 77 82 71 64
5 56 85 63 70
6 95 77 76 68

The model is specified in the dialog box below. The Table method was selected
and the four variables moved to the Table Variables box.

The results are displayed on the next page. A standard analysis of variance table is
displayed first. Note that the F test suggests a substantial between-groups (fat
types) effect, with a p-value of 0.0069.

The F test assumes that the within-group variances are the same for all groups.
Three homogeneity of variances tests appear below the AOV table to test this
assumption. The null hypothesis of these tests is that the variances are equal. A
large F statistic and corresponding small p-value (say, smaller than 0.05) is evidence that there are differences. All three tests agree that the variances are equal in our
example. See Levene (1960), O’Brien (1981), and Brown and Forsythe (1974).

One-Way AOV for: Fat1 Fat2 Fat3 Fat4

Source DF SS MS F P
Between 3 1636.50 545.500 5.41 0.0069
Within 20 2018.00 100.900
Total 23 3654.50

Grand Mean 73.750 CV 13.62

Homogeneity of Variances F P
Levene's Test 0.75 0.5351
O'Brien's Test 0.59 0.6270
Brown and Forsythe Test 0.34 0.7942

Welch's Test for Mean Differences
Source DF F P
Between 3.0 7.48 0.0054
Within 11.0

Component of variance for between groups 74.1000
Effective cell size 6.0

Variable Mean
Fat1 72.000
Fat2 85.000
Fat3 76.000
Fat4 62.000
Observations per Mean 6
Standard Error of a Mean 4.1008
Std Error (Diff of 2 Means) 5.7994

The Welch’s test for mean differences (also called Welch’s ANOVA) is an alternative to the F test in the standard AOV table that is appropriate when homogeneity of variances is suspect (Welch, 1951).

A fixed-effects model (Type I) is appropriate for these data. If a random-effects model were appropriate (Type II), the component of variance for between groups
may be of interest (see Snedecor and Cochran, chap. 13). The between-groups
variance component and effective cell sample size are displayed below Welch’s
test. The computation of effective cell size is described on page 246 of Snedecor
and Cochran.

The bottom portion of the report lists a table of within-group means, sample sizes,
and standard errors of the means. The standard error of the difference of two
means is reported when the sample sizes are equal.
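The headline numbers in this report can be cross-checked with a short script. A sketch in Python (scipy assumed; scipy’s levene function with center='mean' is the classical Levene test, and center='median' is the Brown and Forsythe variant):

# One-way AOV and variance-homogeneity tests on the doughnut data
from scipy import stats

fat1 = [64, 72, 68, 77, 56, 95]
fat2 = [78, 91, 97, 82, 85, 77]
fat3 = [75, 93, 78, 71, 63, 76]
fat4 = [55, 66, 49, 64, 70, 68]

f, p = stats.f_oneway(fat1, fat2, fat3, fat4)
print(round(f, 2), round(p, 4))                      # 5.41 0.0069

w, p_lev = stats.levene(fat1, fat2, fat3, fat4, center='mean')
b, p_bf = stats.levene(fat1, fat2, fat3, fat4, center='median')
print(round(p_lev, 4), round(p_bf, 4))               # compare with the report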

We’ll use the same analysis to illustrate the Categorical method of model
specification. We now create two variables, a dependent variable FatAbsorb and
the treatment variable FatType. The data are listed below, and are available in the
file Sample Data\Doughnuts.sx.



Case FatAbsorb FatType
1 64 1
2 72 1
3 68 1
4 77 1
5 56 1
6 95 1
7 78 2
8 91 2
9 97 2
10 82 2
11 85 2
12 77 2
13 75 3
14 93 3
15 78 3
16 71 3
17 63 3
18 76 3
19 55 4
20 66 4
21 49 4
22 64 4
23 70 4
24 68 4

The model is specified in the dialog below.

One-Way AOV Results Menu: Once the one-way AOV is computed and displayed, the Results menu icon on the toolbar, which appears as a shovel (think “dig-deeper”), is enabled. Click on the icon to display the One-Way AOV results menu.

Select AOV Table to have the AOV table and means displayed again. The
remaining procedures are discussed briefly below; see Chapter 9 for a thorough
discussion.

Multiple Comparisons: The Multiple Comparisons procedures are used to compare the means of the different groups. Other names for these procedures are mean separation tests,
multiple range tests, and tests for homogeneity of means. The multiple
comparisons tests performed by Statistix are divided into three categories: all-
pairwise comparisons, comparisons with a control, and comparisons with the best.
See Multiple Comparisons in Chapter 9 for a discussion of these procedures with
examples of each.

Contrasts This procedure lets you compute a test for any linear contrast of the group means.
Linear contrasts are linear combinations of the means, and they’re valuable for
examining the “fine structure” of the data after the overall F test indicates that the
treatment effect is significant. The test computes the contrast value, sums of
squares for the contrast, Scheffe’s F, and Student’s t-statistic. See Contrasts in
Chapter 9 for details and an example.

Polynomial Contrasts: This procedure computes the polynomial decomposition of the treatment sums of squares. This is useful for determining the existence and nature of trends (i.e.,
linear, quadratic, etc.) in the treatment level means. See Polynomial Contrasts in
Chapter 9 for details and an example.

Plots The Plots submenu offers three plots. The Means Plot produces a line-plot or a
bar-chart of the means for each group. See Means Plot in Chapter 9 for an
example.

The Residuals By Fitted Values plot is useful for examining whether the
variances are equal among the groups. If the order of the groups is meaningful,
then systematic departures from equality can be seen in the plot.

The Normal Probability Plot plots the residuals against the rankits. Plots for
normal data form a straight line. The Shapiro-Wilk statistic for normality is also
reported on the plot. See Chapter 12 for details.

Save Residuals: The Save Residuals procedure is used to save the fitted values and residuals in new or existing variables for later analysis. This option is only available if you use the Categorical method of model specification.



You can save fitted values and/or residuals in new variables by typing variable
names in the spaces provided, as shown below.

You can also save them in existing variables by clicking on the arrows to display a
drop-down list of variables in your data file. The fitted value for an observation in
a one-way AOV is the class mean. The residuals are computed as the observed
value minus the fitted value.

Kruskal-Wallis One-Way AOV


The Kruskal-Wallis One-Way AOV procedure performs a nonparametric one-way analysis of variance. The Kruskal-Wallis statistic is computed, as are the results of a parametric one-way analysis of variance applied to the ranks.

Specification

To use the Kruskal-Wallis One-Way AOV procedure, you can organize your data
in one of two ways. The Table method is where you enter the responses observed
for each of the treatments in its own variable. The Categorical method is where
you enter all of the observed responses in a single dependent variable and enter the treatment levels in a second grouping variable. The Table method is illustrated in
the example below. See page 137 for an example of the Categorical method.

Data Restrictions: Sample sizes within treatment levels can be unequal. The maximum number of treatment levels is 500. The treatment variable used with the categorical method can be of any data type. Real values are truncated to whole numbers. Strings are truncated to 20 characters.

Example The example data are from Snedecor and Cochran (1980, p. 216) and are also used
as example data for the One-Way AOV procedure. The grams of fat absorbed by
batches of doughnuts were measured using four types of fat. The fat types are the
treatments, and the fat absorbed is the response.

We’ll illustrate the Table method of model specification. Suppose we enter the
responses using four variables—Fat1, Fat2, Fat3, and Fat4. Each variable
represents one of the four treatments. The data are listed below, and are available
in the data file Sample Data\Doughnuts.sx.

Case Fat1 Fat2 Fat3 Fat4
1 64 78 75 55
2 72 91 93 66
3 68 97 78 49
4 77 82 71 64
5 56 85 63 70
6 95 77 76 68

The model is specified in the dialog box on the preceding page. The Table method was selected and the four variables moved to the Table Variables box. The results are displayed below.

Kruskal-Wallis One-Way Nonparametric AOV

Mean Sample
Variable Rank Size
Fat1 11.3 6
Fat2 19.5 6
Fat3 13.6 6
Fat4 5.7 6
Total 12.5 24

Kruskal-Wallis Statistic, corrected for ties 11.85
P-Value, Using Beta Approximation 0.0023
P-Value, Using Chi-Squared Approximation 0.0079

Parametric AOV Applied to Ranks
Source DF SS MS F P
Between 3 590.58 196.861 7.06 0.0020
Within 20 557.42 27.871
Total 23 1148.00

Total number of values that were tied 8
Max. diff. allowed between ties 0.00001

Cases Included 24 Missing Cases 0



The Kruskal-Wallis test is a generalization of the Wilcoxon rank sum test. The
data are first ranked irrespective of group. Tied values are assigned their average
rank (Hollander and Wolfe 1973). Values are assumed to be tied if they are within
0.00001 of one another. If each of the groups had similar distributions, the mean
ranks for all groups would be expected to be “similar”. The null hypothesis being
tested is that each of the groups has the same distribution. Strictly speaking, if the
null hypothesis is rejected, the alternative is that the distributions for the groups
differ, although in practice it’s typical to assume that the differences are due to
differences in the central values of the groups.

The Kruskal-Wallis statistic reported includes the correction for ties. The p-value
based on the beta distribution is the more accurate approximation (Meyer and
Seaman, 2011). The chi-squared approximation is also reported. The analysis of
the example is consistent with the parametric One-Way AOV. The p-value of
0.0023 suggests that the mean ranks for the groups are dissimilar enough to
conclude that the fat types differ.

Conover and Iman (1981) proposed first ranking the data and then applying the
usual parametric procedure for computing a one-way analysis of variance. The
results of this procedure are displayed underneath the Kruskal-Wallis test. The
results are interpreted in the same way as the usual analysis of variance, comparing
the within-group variance to the between-group variance. Please note that the
usual F test is generally anti-conservative, giving significant results more often
than it should (Iman and Davenport 1976, 1980).
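For reference, the tie-corrected Kruskal-Wallis statistic and its chi-squared p-value can be reproduced in Python (scipy assumed; scipy does not offer the beta approximation):

# Kruskal-Wallis test on the doughnut data
from scipy import stats

fat1 = [64, 72, 68, 77, 56, 95]
fat2 = [78, 91, 97, 82, 85, 77]
fat3 = [75, 93, 78, 71, 63, 76]
fat4 = [55, 66, 49, 64, 70, 68]
h, p = stats.kruskal(fat1, fat2, fat3, fat4)
print(round(h, 2), round(p, 4))   # 11.85 0.0079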

Dunn’s All-Pairwise Comparisons: Once the Kruskal-Wallis AOV is computed and displayed, the Results menu icon on the toolbar, which appears as a shovel (think “dig-deeper”), is enabled. Click on the icon to display the results menu.

Dunn’s All-Pairwise Comparisons procedure identifies subsets of similar (homogeneous) mean ranks while controlling the experimentwise error rate.

To use the multiple comparisons procedure, you enter a value for alpha, the
rejection level, and select the report format. The results for the doughnut fat
absorption example described on page 148 using the Homogeneous Groups report
format are given on the next page.



Dunn's All-Pairwise Comparisons Test

Mean
Variable Rank Homogeneous Groups
Fat2 19.50 A
Fat3 13.58 AB
Fat1 11.25 AB
Fat4 5.67 B

Alpha 0.05 Standard Error for Comparison 4.0789
Critical Z Value 2.638 Critical Value for Comparison 10.761
There are 2 groups (A and B) in which the means
are not significantly different from one another.

The mean ranks are sorted in descending order so the largest one is listed in the first row. The columns of letters under the heading “Homogeneous Groups” indicate which means aren’t significantly different from one another. Group A contains the mean ranks for Fat2, Fat3, and Fat1. Group B contains the mean ranks for Fat3, Fat1, and Fat4. We conclude that Fat2 is different from Fat4 since neither group A nor group B contains both fat types.

The Triangle Matrix report format makes it easier to identify pairs of means that
are different. An example for the same data appears below. The numbers in the
body of the triangular shaped table are differences between mean ranks.
Significant differences are indicated with an asterisk.

Dunn's All-Pairwise Comparisons Test

Mean
Variable Rank Fat1 Fat2 Fat3
Fat1 11.25
Fat2 19.50 8.25
Fat3 13.58 2.33 5.92
Fat4 5.67 5.58 13.83* 7.92

Alpha 0.05 Standard Error for Comparison 4.0789
Critical Z Value 2.638 Critical Value for Comparison 10.761

The comparison procedure controls the experimentwise error rate. Like the
Bonferroni comparison of means procedure for the parametric AOV (see Chapter
9), Dunn’s test becomes increasingly conservative as the number of means
increases. A larger than normal rejection level (e.g., 0.10-0.25) is often used when
testing large numbers of means. See Dunn (1964) or Daniel (1990) for details.

Statistix’s implementation of Dunn’s test includes the correction for ties.



Friedman Two-Way AOV
The Friedman nonparametric two-way analysis of variance is used to analyze two-
way designs without replication. The results are equivalent to Kendall’s
coefficient of concordance (Conover, 1999).

Specification The analysis can be specified in one of two ways, depending on how the data are
arranged in the variables. A two-way analysis of variance requires that each
observation be classified by two factors. For the Categorical method, all
observations of the Dependent Variable are in one variable. The two factors are
indicated by two Categorical Variables. This is illustrated in the dialog box
below.

In the Table method, the levels of one factor are represented by the variables
themselves. The levels for the other factor are then represented by the cases. To
specify the model, move the names of the variables that represent the column
factor to the Table Variables box (see the example on page 153).

Data Restrictions: There can be only one observation per cell; no replication is permitted. Missing values can’t be included. You can have up to 500 levels in each of the two treatment factors.

Example This example is a randomized block design used in Snedecor and Cochran (1980,
sect. 14.2). The same data are used as an example in the Randomized Complete
Block Design section of Chapter 9, where a parametric two-way analysis of
variance is computed. The dependent variable is the number of soybeans out of
100 that failed to emerge, and the treatments were various fungicides (the first
treatment level was a no-fungicide control).



To illustrate the Categorical method of model specification, we entered the
observed counts into a single variable named Failures. The fungicides were num-
bered 1 through 5 and entered into a variable named Trt. The blocks (replicates)
were numbered 1 through 5 and entered into a variable named Blk. The data are
listed below, and are available in the file Sample Data\Soybeans.sx.

Case Failures Trt Blk Case Failures Trt Blk
1 8 1 1 14 8 3 4
2 10 1 2 15 10 3 5
3 12 1 3 16 3 4 1
4 13 1 4 17 5 4 2
5 11 1 5 18 9 4 3
6 2 2 1 19 10 4 4
7 6 2 2 20 6 4 5
8 7 2 3 21 9 5 1
9 11 2 4 22 7 5 2
10 5 2 5 23 5 5 3
11 4 3 1 24 5 5 4
12 10 3 2 25 3 5 5
13 9 3 3

The Transformation Cat function can be used to generate repetitive sequences, like
those seen for Trt and Blk. After entering the 25 values for Failures, we can use
the Transformation expressions Trt = Cat(5,5) and Blk = Cat(5,1) to create these
variables.

The analysis is specified on the preceding page. The results are presented below.

Friedman Two-Way Nonparametric AOV of Failures by Trt Blk

Test Statistics for Trt DF P
Chi-square, Corrected for Ties 9.35 4 0.0530
F(Conover) 3.51 4,16 0.0307

Mean
Trt Rank
Control 4.70
Fung #1 2.20
Fung #2 3.40
Fung #3 2.50
Fung #4 2.20
Observations per Mean 5

Test Statistics for Blk DF P
Chi-square, Corrected for Ties 5.31 4 0.2573
F(Conover) 1.44 4,16 0.2651

Mean
Blk Rank
1 1.80
2 3.10
3 3.50
4 3.90
5 2.70
Observations per Mean 5

Max. diff. allowed between ties 0.00001

Cases Included 25 Missing Cases 0

For the first factor, which in this case is Trt, the observations are first ranked
within the second factor (Blk). If there were no differences between the treatment
levels, the mean ranks (averaged across blocks) for the different treatment levels would be expected to be “similar”. Tied observations are given a mean rank (Hollander and Wolfe 1973). Values are considered to be tied if they are within 0.00001 of one another. “Corrected for Ties” appears in the display when the Friedman statistic is based on data that contain ties. For this example, ties within
blocks were found. It appears that there are definite treatment effects because the
p-value is fairly small (0.0307).

The report includes the traditional chi-square test. Conover (1999) recommends the following F statistic, which has a more accurate approximation:
F = (n−1)χ² / [n(k−1) − χ²]
where k is the number of treatments and n is the number of blocks.
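A sketch of both statistics in Python (scipy assumed; scipy’s friedmanchisquare applies the tie correction):

# Friedman chi-square and Conover's F on the soybean data
from scipy import stats

control = [8, 10, 12, 13, 11]
fung1 = [2, 6, 7, 11, 5]
fung2 = [4, 10, 9, 8, 10]
fung3 = [3, 5, 9, 10, 6]
fung4 = [9, 7, 5, 5, 3]

chi2, p = stats.friedmanchisquare(control, fung1, fung2, fung3, fung4)
n, k = 5, 5                                          # blocks, treatments
f = (n - 1) * chi2 / (n * (k - 1) - chi2)
p_f = stats.f.sf(f, k - 1, (k - 1) * (n - 1))
print(round(chi2, 2), round(f, 2), round(p_f, 4))    # 9.35 3.51 0.0307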

To examine block effects, the role of the variables is reversed. The observations
are now ranked within treatment levels. Ties among the observations within
treatment levels were found, as indicated by the message that the Friedman statistic
is corrected for ties. There appears to be little evidence of block effects (p=0.2651).
As with parametric analysis of variance, testing the block effect generally isn’t of
much interest.

To use the Table method, we’d choose one factor, say fungicide, to represent
columns and enter the data for each fungicide into a separate variable. The cases
then represent the blocks (replicates).

Case Control Fung1 Fung2 Fung3 Fung4
1 8 2 4 3 9
2 10 6 10 5 7
3 12 7 9 9 5
4 13 11 8 10 5
5 11 5 10 6 3

The order of the cases is very important with this format. The first case
corresponds to the first experimental block, the second case corresponds to the
second block, and so on. The model is then specified by selecting the table method
of Model Specification, and moving the variables that represent the fungicide
treatments to the Table Variables box.

The Friedman test is often performed as a companion analysis to the parametric two-way analysis of variance, especially when the assumption of normality in
parametric analysis of variance is suspect. It’s not as powerful as the parametric
analysis, but it usually performs quite well.

Friedman All-Pairwise Comparisons: Once the Friedman AOV is computed and displayed, the Results menu icon on the toolbar, which appears as a shovel (think “dig-deeper”), is enabled. Click on the icon to display the results menu.

The Friedman All-Pairwise Comparisons procedure is used to identify individual comparisons among all pairs of mean ranks that are significantly different. To use the multiple comparisons procedure, you select a comparison
method, enter a value for alpha, the rejection level, and select the report format.



There are two comparison methods available. The first method was suggested by Conover (1999). Two mean ranks are considered different if the difference exceeds the critical value:
t(1−α/2, (k−1)(n−1)) · [2(nΣΣR² − ΣT²) / ((k−1)(n−1))]^½ / n
where k is the number of treatments, n is the number of blocks, R is an individual within-block rank, and T is a treatment rank sum.

Dunn’s method controls the experimentwise error rate by including the Bonferroni adjustment, resulting in a more conservative test (Demsar, 2006). The critical value for a difference is:
z(α/(k(k−1)/2)) · [k(k+1)/(6n)]^½
where k is the number of treatments and n is the number of blocks.
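As a check on Conover’s formula, the sketch below recomputes the critical value for the soybean example. The values A1 (the sum of squared within-block ranks) and sumT2 (the sum of squared treatment rank sums) were tallied from the ranked data; they are inputs to the sketch, not Statistix output:

# Conover's critical difference for the soybean example (k = 5, n = 5)
from math import sqrt
from scipy import stats

n, k = 5, 5
A1 = 274.0        # sum of squared within-block ranks (ties averaged)
sumT2 = 1239.5    # sum of squared treatment rank sums
t = stats.t.ppf(0.975, (k - 1) * (n - 1))            # 2.120
cd = t * sqrt(2 * (n * A1 - sumT2) / ((k - 1) * (n - 1))) / n
print(round(cd, 4))   # about 1.712, the Critical Value for Comparison below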

The results for the soybean example described on page 151 using the Triangular
Matrix report format are given below.

Conover's All-Pairwise Comparisons Test of Failures by Trt Blk

Mean
Trt Rank Control Fung #1 Fung #2 Fung #3
Control 4.70
Fung #1 2.20 2.50*
Fung #2 3.40 1.30 1.20
Fung #3 2.50 2.20* 0.30 0.90
Fung #4 2.20 2.50* 0.00 1.20 0.30

Alpha 0.05
Critical T Value 2.120 Critical Value for Comparison 1.7124

The numbers in the body of the triangular shaped table are differences between
mean ranks. Significant differences are indicated with an asterisk.

An example of the Homogeneous Groups report format can be found on page 150.



C H A P T E R

7
Hypothesis Tests
The procedures in this menu allow you to perform hypothesis tests of proportions,
means, and correlations using only summary statistics (e.g., sample size, mean, and
standard deviation) rather than the raw data. These procedures also compute
confidence intervals for proportions, means, and correlations.

One Proportion

The One Proportion procedure is used to test whether a proportion differs from a
hypothesized value. It also provides a confidence interval for the proportion.

You specify the observed proportion by entering values for Sample Size and No.
Successes. The observed proportion is computed for you as the number of
successes divided by the sample size. Enter a hypothesized value of the proportion
in the Null Hypothesis (p0) box. Select the two-sided alternative hypothesis “p ≠ p0”, or a one-sided alternative “p < p0” or “p > p0”. Enter a value for the percent
coverage for confidence intervals.

The report for the test specified in the dialog box above is shown on the next page.

The observed proportion and the difference between it and the hypothesized value
are displayed. Two versions of the test statistic Z are given: uncorrected and
corrected for continuity. The exact test is computed using the binomial
distribution (Rosner, 2000). Reject the null hypothesis when the p-value is small (say, p < 0.05).

Hypothesis Test - One Proportion

Sample Size 90
Successes 48
Proportion 0.53333

Null Hypothesis: P = 0.5
Alternative Hyp: P ≠ 0.5

Difference 0.03333
Standard Error 0.05259
Z (uncorrected) 0.63 P 0.5271
Z (corrected) 0.53 P 0.5982
Exact Test P 0.5984

Method 95% Confidence Interval
Simple Asymptotic (0.43026, 0.63640)
Simple Asymptotic with CC (0.42471, 0.64196)
Wilson Score (0.43102, 0.63292)
Wilson Score with CC (0.42563, 0.63817)
Clopper-Pearson (0.42513, 0.63928)
Notes on C.I.: 1) CC means continuity correction.
2) Wilson Score method with CC is the preferred method.

Confidence intervals are displayed using five methods. The intervals using the
familiar asymptotic methods (also called Wald’s method) with and without
continuity correction are displayed. The Wilson Score methods (with and without
continuity correction) are refinements of the simple asymptotic methods and are
preferred. Consult Newcombe (1998). The Clopper-Pearson interval is based on
the binomial distribution (Clopper and Pearson, 1934).
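The Wilson score interval (without continuity correction) is simple to compute directly. A sketch for the example above, assuming a 95% level:

# Wilson score 95% confidence interval for 48 successes in 90 trials
from math import sqrt

n, x, z = 90, 48, 1.959964
p = x / n
center = (p + z**2 / (2 * n)) / (1 + z**2 / n)
half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
print(round(center - half, 5), round(center + half, 5))   # 0.43102 0.63292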

Two Proportions
The Two Proportions procedure is used to compare proportions in two independent samples. The null hypothesis tested is that the two proportions are the same. A confidence interval for the difference between the two proportions is given.



The test requires you to enter values for the sample sizes and the number of successes (the number of times the event of interest was observed). Select the two-sided alternative hypothesis “p1 ≠ p2”, or a one-sided alternative “p1 < p2” or
“p1 > p2”. Enter a value for the percent coverage for confidence intervals.

The results for the test specified in the dialog box above are displayed below.

Hypothesis Test - Two Proportions

Sample
Size Successes Proportion
Sample 1 30 20 0.66667
Sample 2 36 18 0.50000
Difference 0.16667

Null Hypothesis: p1 = p2
Alternative Hyp: p1 ≠ p2

SE (difference) 0.12218
Z (uncorrected) 1.36 P 0.1725
Z (corrected) 1.11 P 0.2653
Fisher's Exact 0.2152

95% Confidence Interval of Difference
-0.07279 < p1-p2 < 0.40613

The report displays the computed proportions for the two samples and the
difference. Two versions of the normal approximation tests are provided: the
uncorrected test and the test corrected for continuity. P-values for both tests are
given. When the combined sample size is less than 5000, Fisher’s exact test is also
displayed. Fisher’s exact test, when available, is the preferred test. In the
example, the p-values for all three tests agree: there is no evidence that the two
proportions are different. The confidence interval reported is computed using the
normal approximation.
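The uncorrected z test and Fisher’s exact test can be verified directly. A sketch in Python (scipy assumed):

# Two-proportion z test (uncorrected) and Fisher's exact test
from math import sqrt
from scipy import stats

n1, x1, n2, x2 = 30, 20, 36, 18
p1, p2 = x1 / n1, x2 / n2
pbar = (x1 + x2) / (n1 + n2)                         # pooled proportion
z = (p1 - p2) / sqrt(pbar * (1 - pbar) * (1 / n1 + 1 / n2))
p_z = 2 * stats.norm.sf(abs(z))
_, p_fisher = stats.fisher_exact([[x1, n1 - x1], [x2, n2 - x2]])
print(round(z, 2), round(p_z, 4), round(p_fisher, 4))   # 1.36 0.1725 0.2152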

One Mean
The One Mean procedure tests whether a mean differs from a hypothesized value.
It also provides a confidence interval for the mean.

The test requires you to enter the sample size, the mean, and the standard
deviation. Enter a value for the hypothesized value of the mean in the Null
Hypothesis (µ0) box. Select the two-sided alternative hypothesis “µ ≠ µ0”, or a
one-sided alternative “µ < µ0” or “µ > µ0”. Enter a value for the percent coverage
for confidence intervals.



This procedure is equivalent to the one-sample t test. See page 130 for an example
and description of the report.

Two Means
The Two Means procedure is used to test whether two means are the same. It also
provides a confidence interval for the difference of the two means.

The test requires you to enter the sample size, the mean, and standard deviation for
each sample. Enter a value for the hypothesized value of the mean difference in
the Null Hypothesis (µ1-µ2) box (a value of zero tests whether the two means are
the same). Select an Alternative Hypothesis: the two-sided alternative hypothesis
“µ1-µ2 … µ0”, or a one-sided alternative “µ1-µ2 < µ0” or “µ1-µ2 > µ0”. Enter a
value for the percent coverage for confidence intervals.

This procedure is equivalent to the two-sample t test. See page 136 for an example
and description of the report.



One Variance
The One Variance procedure is used to test whether a sample variance differs
from a hypothesized value. It also provides a confidence interval for the variance.

Specify the Sample Size and the observed Variance (σ²). Enter a hypothesized value of the variance in the Null Hypothesis (σ0²) box. Select the two-sided alternative hypothesis “σ² ≠ σ0²”, or a one-sided alternative “σ² < σ0²” or “σ² > σ0²”.
Enter a value for the percent coverage for confidence intervals.

A common use for this test is to determine whether a new method increases the
variance compared to a standard method, in which case a one-sided test is
appropriate. The report for the test specified in the dialog box above is shown
below.

Hypothesis Test - One Variance

Null Hypothesis σ² = 676
Alternative Hyp σ² > 676
N 20
Variance 1225.0

DF 19
Chi-Square 34.43
P 0.0163
95% C.I. Lower Bound 708.47
95% C.I. Upper Bound 2613.3

The chi-square test statistic is computed as χ² = DF·σ²/σ0² (Snedecor and Cochran, 1980). The p-value displayed is computed using the chi-square function in Probability Functions (see Chapter 17). Reject the null hypothesis when the p-value is small (say, p < 0.05).

The confidence interval is computed using percentage points of the chi-square distribution (Snedecor and Cochran, 1980).
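Both the test statistic and the interval follow directly from the chi-square distribution. A sketch in Python (scipy assumed):

# Chi-square test and 95% CI for a single variance (H0: variance = 676)
from scipy import stats

n, s2, sigma0_sq = 20, 1225.0, 676.0
df = n - 1
chi2 = df * s2 / sigma0_sq
p_upper = stats.chi2.sf(chi2, df)                    # one-sided alternative
lo = df * s2 / stats.chi2.ppf(0.975, df)
hi = df * s2 / stats.chi2.ppf(0.025, df)
print(round(chi2, 2), round(p_upper, 4), round(lo, 1), round(hi, 1))
# 34.43 0.0163 708.5 2613.3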



Two Variances
The Two Variances procedure is used to test the null hypothesis that two
variances from independent random samples from normal populations are the
same. It’s the same test used to test for equality of variances for the Two-sample T
Test.

Specify values for the Sample Size and Variance for the two samples. The null
hypothesis is that the variances are the same. Select the two-sided alternative
hypothesis “σ1² ≠ σ2²”, or a one-sided alternative “σ1² < σ2²” or “σ1² > σ2²”.

The report for the test specified in the dialog box above is shown below.

Hypothesis Test - Two Variances

Null Hypothesis σ1² = σ2²
Alternative Hyp σ1² ≠ σ2²
Sample Size 1 10
Variance 1 0.5890
Sample Size 2 10
Variance 2 0.0270

DF 9,9
F 21.81
P 0.0001

The test criterion is F = σ1² / σ2², where σ1² is the larger of the two variances
(Snedecor and Cochran, 1980). The p-value displayed is computed using the F
function in Probability Functions (see Chapter 17). Reject the null hypothesis
when the p-value is small (say, p < 0.05).
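A sketch of the same calculation in Python (scipy assumed):

# Two-sided F test for two variances
from scipy import stats

var1, n1 = 0.5890, 10                                # larger variance on top
var2, n2 = 0.0270, 10
f = var1 / var2
p = 2 * stats.f.sf(f, n1 - 1, n2 - 1)
print(round(f, 2), round(p, 4))   # 21.81 0.0001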



One Correlation
The One Correlation procedure is used to test whether a correlation differs from a
hypothesized value. It also provides a confidence interval for the correlation.

Specify the Sample Size and the observed Correlation Coefficient. Enter a hypothesized value of the correlation in the Null Hypothesis (ρ0) box. Select the two-sided alternative hypothesis “ρ ≠ ρ0”, or a one-sided alternative “ρ < ρ0” or “ρ > ρ0”. Enter a value for the percent coverage for confidence intervals.

The report for the test specified in the dialog box above is shown below.

Hypothesis Test - One Correlation Coefficient

Null Hypothesis ρ = 0.00
Alternative Hyp ρ ≠ 0.00
N 50
R 0.25

T 1.79
DF 48
P 0.0799

Method 95% Confidence Interval
Normal Approximation (-0.0305, 0.4940)
Exact (-0.0282, 0.4962)

The test applies a T statistic when the null hypothesis is ρ = 0, and a Z statistic using Fisher’s transformation when the null hypothesis is that ρ equals a number other than zero (Rosner, 2000). Reject the null hypothesis when the p-value is small
(say, p < 0.05).

Confidence intervals are displayed using two methods: The normal approximation
method (Rosner, 2000), and the exact method based on the cumulative distribution
for correlations (Guenther, 1977).
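Both the t test and the normal-approximation interval are easy to reproduce. A sketch in Python (scipy assumed):

# t test for r = 0.25 (n = 50) and Fisher-z confidence interval
from math import atanh, tanh, sqrt
from scipy import stats

n, r = 50, 0.25
t = r * sqrt(n - 2) / sqrt(1 - r**2)
p = 2 * stats.t.sf(abs(t), n - 2)
z, se = atanh(r), 1 / sqrt(n - 3)
lo, hi = tanh(z - 1.959964 * se), tanh(z + 1.959964 * se)
print(round(t, 2), round(p, 4), round(lo, 4), round(hi, 4))
# 1.79 0.0799 -0.0305 0.4940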



Two Correlations
The Two Correlations procedure is used to compare correlations in two
independent samples. The null hypothesis tested is that the two correlation
coefficients are the same.

The test requires you to enter values for the sample sizes and sample coefficients for the two correlations. Select the two-sided alternative hypothesis “ρ1 ≠ ρ2”, or a one-sided alternative “ρ1 < ρ2” or “ρ1 > ρ2”. Enter a value for the percent
coverage for confidence intervals. The results for the test specified in the dialog
box are displayed below.

Hypothesis Test - Two Correlation Coefficients

Null Hypothesis ρ1 = ρ2
Alternative Hyp ρ1 ≠ ρ2
Sample Size 1 50
Coefficient 1 0.12
Sample Size 2 100
Coefficient 2 0.20

Z -0.46
P 0.6439

The report displays the normal approximation test statistic Z and the associated p-
value. Reject the null hypothesis when the p-value is small (say, p < 0.05).
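The test statistic follows from Fisher’s transformation of each coefficient. A sketch in Python (scipy assumed):

# Z test comparing two independent correlations
from math import atanh, sqrt
from scipy import stats

n1, r1, n2, r2 = 50, 0.12, 100, 0.20
z = (atanh(r1) - atanh(r2)) / sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
p = 2 * stats.norm.sf(abs(z))
print(round(z, 2), round(p, 4))   # -0.46 0.6439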



C H A P T E R

8
Linear Models
Statistix offers you a comprehensive selection of linear model procedures, which
include regression, analysis of variance, and analysis of covariance. Linear models
are among the most powerful and popular tools available for data analysis.

The Correlations (Pearson) procedure displays the correlation matrix for a set of
variables.

The Partial Correlations procedure computes the correlations of a set of independent variables with a dependent variable after adjusting for the effects of
another set of independent variables. It’s a valuable tool when working with
multiple regression.

The Variance-Covariance procedure displays the sample variance-covariance matrix for a set of variables.

The Linear Regression procedure performs simple and multiple linear regression.
You can compute predicted values and prediction intervals for any set of
independent variable values. Extensive residual analysis options are available for
model evaluation. The sensitivity of the regression coefficients to errors in the
independent variables can be examined.

The Best Subset Regressions procedure generates a list of “best” subset models
for a specified regression model.

The Stepwise Linear Regression procedure performs forward and backward stepwise linear regression in search of good subset regression models.

The Logistic Regression procedure is appropriate for a situation where the dependent variable consists of “binary” data. Common examples of binary data
are yes or no responses and success or failure outcomes.

The Stepwise Logistic Regression procedure performs forward and backward stepwise logistic regression.

The Poisson Regression procedure is appropriate for situations where the dependent variable consists of discrete counts. See the section titled Analyzing
Proportions and Counts on page 207 for more background on when and why
Poisson Regression and Logistic Regression are useful.



The Two Stage Least Squares Regression procedure is used to develop a
prediction equation when one or more of the predictor variables, or right hand side
variables, is an endogenous variable. An endogenous variable is one that’s
determined by the system of equations being solved. The model also requires at
least one exogenous variable. An exogenous variable is one whose value is
determined outside the system of equations.

The Eigenvalues-Principal Components procedure computes the eigenvectors
and eigenvalues and the principal components for a set of variables. It’s often used
in regression when the independent variables are highly correlated, and it’s also a
useful multivariate analysis in its own right.

Analysis of Variance is a powerful and popular subset of linear models. Statistix
offers a long list of AOV designs: completely randomized, complete block, Latin
square, balanced lattice, factorial, split-plot, strip-plot, split-split-plot,
split-strip-plot, and repeated measures designs. There are so many AOV procedures
and options that we’ve devoted Chapter 9 to them. The Analysis of Variance menu
appears on the main Statistics menu below the entry for Linear Models.

Correlations (Pearson)
The Correlations procedure computes a correlation matrix for a list of variables.
Correlations, also called Pearson or product-moment correlations, indicate the
degree of linear association between variables.

Specification

Select the variables you want correlations computed for. Highlight the variables
you want in the Variables list, then press the right-arrow button to move the
highlighted variables to the Correlations Variables list box. If you want to use a
weighting factor, move the name of the variable containing the weights to the
Weight Variable box. Use the Fit Constant check box to specify a model with a
constant fitted (checked) or a model forced through the origin (not checked).

Check the Compute P-Values check box to have p-values for the correlation
coefficients displayed. Check the box for Cronbach’s Alpha to have Statistix
display values for Cronbach’s alpha and the standardized version of Cronbach’s
alpha. Alpha is a measure of reliability of a psychometric or other instrument. If
the variables represent a set of test items, the value measures the extent to which
the set of items measures a single latent variable. Check the Fleiss' Kappa box to
display kappa, a measure for assessing the reliability of agreement between a fixed
number of raters when assigning categorical ratings to a number of items or
classifying items.

Data You can select up to 200 variables. If a case in your data has missing values for
Restrictions any of the variables selected, the entire case is deleted (listwise deletion). Negative
weights aren’t allowed.

Example We’ll use the Hald data from Draper and Smith (1966) for our example. The
variable Heat is the cumulative heat of hardening for cement after 180 days. The
variables Chem1, Chem2, Chem3, and Chem4 are the percentages of four
chemical compounds measured in batches of cement. The data are listed below
(see also Sample Data\Hald.sx).

Case Heat Chem1 Chem2 Chem3 Chem4
1 78.5 7 26 6 60
2 74.3 1 29 15 52
3 104.3 11 56 8 20
4 87.6 11 31 8 47
5 95.9 7 52 6 33
6 109.2 11 55 9 22
7 102.7 3 71 17 6
8 72.5 1 31 22 44
9 93.1 2 54 18 22
10 115.9 21 47 4 26
11 83.8 1 40 23 34
12 113.3 11 66 9 12
13 109.4 10 68 8 12

The model specification is displayed on the preceding page. The results are
displayed below.

Correlations (Pearson)

Chem1 Chem2 Chem3 Chem4 Heat
Chem1 1.0000
Chem2 0.2286 1.0000
Chem3 -0.8241 -0.1392 1.0000
Chem4 -0.2454 -0.9730 0.0295 1.0000
Heat 0.7307 0.8163 -0.5347 -0.8213 1.0000

Cases Included 13 Missing Cases 0

Computational First, the matrix of sums of squares and cross products corrected for the means is
Notes calculated using the method of updating, also known as the method of provisional
means. The results are more accurate than if the usual “computational equations”
were used. If the model is fit without a constant, the matrix corrected for the
means is first computed and then “uncorrected”.
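
The updating idea can be sketched in a few lines of Python for two variables;
Statistix applies the same idea to the whole matrix. This is an illustration of
the general method of provisional means, not the program’s actual code.

def corrected_sxy(xs, ys):
    # corrected sum of cross products via the method of provisional means
    n, mx, my, sxy = 0, 0.0, 0.0, 0.0
    for x, y in zip(xs, ys):
        n += 1
        dx = x - mx           # deviation from the provisional mean of x
        mx += dx / n          # update the provisional means
        my += (y - my) / n
        sxy += dx * (y - my)  # uses the updated mean of y
    return sxy                # cross products corrected for the means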

Partial Correlations
The Partial Correlations procedure computes the “residual” correlation between
variables after adjustment for correlations with another set of variables. Partial
correlations are often used in manual stepwise regression procedures to help you
decide which variable should be included next in the regression.

Specification

First select the Dependent Variable. Next select the independent variables for
which you want the correlations adjusted and move them to the Adjust For
Variables list. Then select the independent variables for which you want to
compute partial correlations and move them to the Correlation Variables list. The
resulting table displays a correlation coefficient between the Dependent Variable
and each of the Correlation Variables, adjusted for the Adjust For Variables.

If you want a weighting factor, move the variable containing the weights to the
Weight Variable box. Use the Fit Constant check box to specify a constant fitted
model (checked) or a model forced through the origin (not checked).

Data The total number of independent variables can’t exceed 200. Missing values or
Restrictions zero weights cause the entire case to be dropped. Negative weights aren’t allowed.

Example We’ll use the Hald data from Draper and Smith (1966) for our example. The
variable Heat is the cumulative heat of hardening for cement after 180 days. The
variables Chem1, Chem2, Chem3, and Chem4 are the percentages of four
chemical compounds measured in batches of cement. The data are listed on page
165 and can be obtained from the file Sample Data\Hald.sx.

The model specified in the dialog box on the preceding page is used to compute
the partial correlations for Chem2 and Chem3 on Heat, adjusted for Chem1 and
Chem4. The results are displayed below.

Partial Correlations with Heat

Controlled for Chem1 Chem4

Chem2 0.5986
Chem3 -0.5657

Cases Included 13 Missing Cases 0

The partial correlation of Chem2 with Heat after the effects of Chem1 and Chem4
have been removed is 0.5986. The partial correlation of Chem3 with Heat,
adjusted for Chem1 and Chem4, is −0.5657.
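
One way to reproduce these values outside Statistix is the textbook residual
definition of a partial correlation: regress both variables on the adjust-for
set and correlate the residuals. The Python sketch below assumes NumPy arrays
holding the Hald variables; the function is our own illustration.

import numpy as np

def partial_corr(y, x, adjust):
    # correlation of y and x after removing the effect of the adjust-for set
    A = np.column_stack([np.ones(len(y))] + list(adjust))  # constant fitted
    ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]      # residuals of y
    rx = x - A @ np.linalg.lstsq(A, x, rcond=None)[0]      # residuals of x
    return np.corrcoef(ry, rx)[0, 1]

# partial_corr(heat, chem2, [chem1, chem4])  should give about 0.5986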

Variance-Covariance
The Variance-Covariance procedure computes the variances and covariances for
a list of variables.

Specification Select the variables for which you want variances and covariances computed.
Highlight the variables you want in the Variables list, then press the right-arrow
button to move them to the Var-Covar Variables list box.

Select the name of a weighting variable for weighted variances-covariances. If you
specify a model without a constant (uncheck the Fit Constant check box), Statistix
computes the sums of squares and cross products uncorrected for the means.

Data Up to 200 variables can be specified. If a case in your data has missing values for
Restrictions any of the variables, the entire case is deleted (listwise deletion). Negative weights
aren’t permitted.

Example We’ll use the Hald data from Draper and Smith (1966) for our example. The
variable Heat is the cumulative heat of hardening for cement after 180 days. The
variables Chem1, Chem2, Chem3, and Chem4 are the percentages of four
chemical compounds measured in batches of cement. The data are listed on page
165.

The analysis is specified in the dialog box above. The results are displayed below.
The values displayed on the diagonal of the matrix are the variances, the off-
diagonal values are covariances.

Variance - Covariance Matrix

Chem1 Chem2 Chem3 Chem4 Heat
Chem1 34.6026
Chem2 20.9231 242.141
Chem3 -31.0513 -13.8782 41.0256
Chem4 -24.1667 -253.417 3.16667 280.167
Heat 64.6635 191.079 -51.5192 -206.808 226.314

Cases Included 13 Missing Cases 0

Linear Regression
The Linear Regression procedure performs simple and multiple linear regression.
Linear regression is a popular technique for examining linear relationships between
a response (dependent) variable and one or more predictor (independent) variables.
This procedure can perform both weighted and unweighted least squares fitting,
and you can specify no-intercept models. Extensive analysis of variance and
residual analysis options are available. You can also compute predicted values and
prediction intervals.

Specification

To specify a regression model, first select the dependent variable. Highlight the
variable in the Variables list, then press the right-arrow button next to the
Dependent Variable box to move the highlighted variable into that box. Select
one or more independent variables and move them to the Independent Variables
list box. For weighted regression, move the variable containing the weights to the
Weight Variable box. Use the Fit Constant check box to specify a constant fitted
model (checked) or a model forced through the origin (not checked).

Data You can include up to 200 independent variables in the model. If any values
Restrictions within a case are missing for any of the variables in the model, the case is dropped
(listwise deletion). If an independent variable is too highly correlated with a linear
combination of other independent variables in the model (collinearity), it’s
dropped from the model. If you specify a weight variable, cases with negative
weights are deleted.

Example We’ll use the data from Hald (1952) for our example. Draper and Smith (1966)
used the same data set to illustrate selecting the “best” regression equation. The
variable Heat is the cumulative heat of hardening for cement after 180 days. The
variables Chem1, Chem2, Chem3, and Chem4 are the percentages of four chem-
ical compounds measured in batches of cement. The goal is to relate the heat of
hardening to the chemical composition. The data are listed on page 165 and are
available in the file Sample Data\Hald.sx.

The full model is specified in the dialog box on the preceding page. The results
are listed below.

Least Squares Linear Regression of Heat

Predictor
Variables Coefficient Std Error T P VIF
Constant 62.4054 70.0710 0.89 0.3991 0.0
Chem1 1.55110 0.74477 2.08 0.0708 38.5
Chem2 0.51017 0.72379 0.70 0.5009 254.4
Chem3 0.10191 0.75471 0.14 0.8959 46.9
Chem4 -0.14406 0.70905 -0.20 0.8441 282.5

R² 0.9824 Mean Square Error (MSE) 5.98295
Adjusted R² 0.9736 Standard Deviation 2.44601
AICc 42.944
PRESS 110.35

Source DF SS MS F P
Regression 4 2667.90 666.975 111.48 0.0000
Residual 8 47.86 5.98295
Total 12 2715.76

Cases Included 13 Missing Cases 0

The regression coefficient table gives the regression coefficients (slopes)
associated with the independent variables and their standard errors, t-statistics,
associated p-values, and variance inflation factors (VIF). You use p-values to test
whether the slopes are significantly different from zero if all other variables are
already in the model.

Large VIF’s in a multiple regression indicate that collinearity is a problem. The
VIF represents the increase in variance of a coefficient due to correlation between
the independent variables. Values of 7.0 or 10.0 have been suggested for the
cutoff of what constitutes a “high” value.
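
The VIF for each variable can be computed from an auxiliary regression of that
variable on the remaining independent variables. The sketch below is our own
helper showing the definition, not a Statistix routine.

import numpy as np

def vif(X):
    # VIF_j = 1 / (1 - R²) from regressing column j on the other columns
    n, k = X.shape
    out = []
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        fit = others @ np.linalg.lstsq(others, X[:, j], rcond=None)[0]
        resid = X[:, j] - fit
        r2 = 1 - resid.var() / X[:, j].var()
        out.append(1.0 / (1.0 - r2))
    return out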

Statistix provides several other summary statistics (see Miscellaneous Regression
Topics on page 183) and an analysis of variance table for the regression, including
the F test and associated p-value for the significance of the overall model. In our
example, the overall F is 111.48, with a p-value of 0.0000. This indicates that at
least some of the independent variables are important in explaining the observed
variation in Heat. From the coefficient column, the regression equation is found to
be:
Heat = 62.4 + 1.55×CHEM1 + 0.510×CHEM2 + 0.102×CHEM3 − 0.144×CHEM4

However, the t tests suggest that some of the coefficients aren’t significantly
different from zero. The high VIF’s warn us that collinearity among the independent
variables is a problem. Selecting the best model is discussed on page 184.

Regression Once the regression analysis is computed and displayed, the Results menu icon
Results Menu on the toolbar, which appears as a shovel (think “dig-deeper”), is enabled. Click
on the icon to display the regression results menu shown below.

Select Coefficient Table from the menu to redisplay the regression coefficient
table. The remaining options are described on the following pages.

Comparison of Frequently, the relationship between two variables X and Y is studied in samples
Regression obtained by different investigators, or in different environments, or at different
Lines times. In summarizing the results, the question naturally arises: Can the regression
lines be regarded as the same?

This procedure compares simple regression lines for two or more groups. It tests
for equality of variance, slope, and elevation. This test can only be used for simple
regression, that is, when there’s only one independent variable.

First, run the Linear Regression procedure. Then select the Comparison of
Regression Lines from the Results menu and select a Group Variable identifying
two or more groups. The group variable can be any data type (real, integer, date,
or string) and can have up to 500 levels.

As an example, consider the cholesterol and age data from Chapter 4. In a survey
of women from two states, cholesterol concentration and age data were collected.
We first perform linear regression of cholesterol concentration on age and find a
linear relationship. We now want to determine if the regression lines are the same
for subjects from the two states where the data were collected: Iowa and Nebraska. We
select the variable State as shown in the dialog box above. The results are shown
below.

Comparison of Regression Lines for Conc = Age

State N Intercept Slope MSE
Iowa 11 35.8112 3.23814 2391.22
Nebraska 19 101.298 2.52044 1580.82

F DF P
Equality of Variances 1.51 9, 17 0.2210
Comparison of Slopes 0.38 1, 26 0.5425
Comparison of Elevations 3.00 1, 27 0.0947

The statistics for the individual regression lines are listed first. Below that are
three F tests: for equality of variance, slopes, and elevations. We first compare the
variances for the two regression lines using Bartlett’s test. The p-value of 0.2210
allows us to conclude that there are no real differences between the variances.
Assuming homogeneity of residual variances, we now compare the two slopes,
3.24 for Iowa, and 2.52 for Nebraska. The p-value of 0.5425 does not suggest that
the slopes are different. Assuming parallel lines and homogeneous variance, we
now consider the test for comparison of elevations. The p-value of 0.0947 is not
quite significant. We conclude that the regression lines are the same for the two
states.

You can use the Scatter Plot procedure discussed in Chapter 4 to visually compare
regression lines using a grouping variable. See Snedecor and Cochran (1980) for
details of these tests.

Durbin- This option computes the Durbin-Watson test for autocorrelation for a particular
Watson Test regression model. In the example below, the regression model was specified as
Heat = Chem2 Chem3. The results are displayed below.

Durbin-Watson Test for Autocorrelation

Durbin-Watson Statistic 1.7498

P-Values, using Durbin-Watson's Beta Approximation

P (positive corr) 0.2846
P (negative corr) 0.7154

Expected Value of Durbin-Watson Statistic 2.0359
Exact Variance of Durbin-Watson Statistic 0.23450

Cases Included 13 Missing Cases 0

The Durbin-Watson statistic and approximate observed significance levels
(p-values) are displayed. Use the Durbin-Watson statistic to test whether the
random errors about the regression line exhibit autocorrelation. In our example,
there’s little suggestion of either positive (p=0.2846) or negative (p=0.7154)
autocorrelation.

Most tests in regression are based on the assumption that the random errors are
independent. Violation of this assumption due to autocorrelation can invalidate the
results of these tests. Another reason to check for autocorrelation is because it may
suggest that you need additional independent variables in the model.

Autocorrelation can occur when the data have some natural sequence, i.e., the
observations are ordered in time or space. You should always consider the
possibility of autocorrelation in trend data, such as price or population levels over
time. Positive autocorrelation results when large positive errors tend to be
followed by large positive errors and large negative errors tend to be followed by
large negative errors. Negative autocorrelation is less common and results when
large errors tend to be followed by large errors with the opposite sign. Chatterjee
and Price (1991) give more detail and some examples of the application of the
Durbin-Watson test.

If there’s neither positive nor negative autocorrelation, the Durbin-Watson statistic
will be close to 2. A value close to 0 suggests positive autocorrelation, and a value
close to 4 suggests negative autocorrelation. The observed significance levels (p-
values) are calculated with the beta distribution approximation suggested by
Durbin and Watson (1971). Their results indicated that this approximation usually
works well even for relatively small sample sizes. (Statistix won’t compute the test
for samples with fewer than ten cases). The procedures to calculate the
significance level, the expected value, and the variance are described in Durbin and
Watson (1951). The beta approximation can’t be used when the variance of the
Durbin-Watson statistic is large, in which case, the p-values aren’t computed.
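
The statistic itself is simple to compute from a vector of residuals, as the
sketch below shows; the beta-approximation p-values are more involved and aren’t
reproduced here. The function is an illustration, not Statistix code.

import numpy as np

def durbin_watson(residuals):
    # DW is near 2 with no autocorrelation, near 0 with positive
    # autocorrelation, and near 4 with negative autocorrelation
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)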

The Runs Test and Shapiro-Wilk Test are also useful for examining whether the
test assumptions in regression have been violated.

Prediction This option computes the predicted or fitted value of the dependent variable in a
regression for values of the independent variable(s) you specify.

The values for the independent variables can be indicated in two ways. One is
simply to enter the list of desired values. The other is to enter a case number that
contains the desired values for the independent variables. You choose a method by
selecting one of the Specification Method radio buttons.

The Value Method is illustrated in the dialog box on the next page. The values 8
and 80 are entered in the Predictor Values box for the variables Chem1 and
Chem3. The default value of 1.0 is used for the weight, so the resulting prediction
intervals are for a single future observation. If you specify a weight w, the
prediction interval is for the mean of w future observations. You can also specify
the percent coverage for confidence intervals for the fitted value. Enter a value in
the C.I. Percent Coverage box.

The results for the example are displayed below.

Predicted/Fitted Values of Heat

Lower Predicted Bound -6.9581 Lower Fitted Bound -4.7225
Predicted Value 130.41 Fitted Value 130.41
Upper Predicted Bound 267.77 Upper Fitted Bound 265.53
SE (Predicted Value) 61.650 SE (Fitted Value) 60.646

Unusualness (Leverage) 29.9737

Percent Coverage 95
Corresponding T 2.23

Predictor Values: Chem1=8.0000, Chem3=80.000

The predicted value is 130.41 with a standard error of 61.650. The 95% prediction
interval is −6.9581 to 267.77. This interval is expected to contain the value of a
future single observation of the dependent variable Heat 95% of the time, given
that Heat is observed at the independent variable values Chem1 = 8, Chem3 = 80.

The unusualness value tells you how “close” the specified independent data point
is to the rest of the data. If the point isn’t close to the rest of the data, then you’re
extrapolating beyond your data—and prediction abilities may be very poor. An
unusualness value greater than 1.0 should be considered large, so the prediction
results in the example above are clearly suspect.

Unlike most other procedures, the prediction option lets you compute statistics for
omitted cases. So you can divide the data into two subsets by omitting some cases.
The regression model will be fitted using the selected cases, and you can use this
procedure to see how well the model fit cases that weren’t used to fit the model.
Points used to fit the regression shouldn’t be used to validate the regression model
because least squares strives to optimize the goodness-of-fit at these points, so
these points give an optimistic impression of the regression’s performance.

For weighted regression models, the value for the weight variable at the case
specified will be used as the weight in the prediction interval calculations. A
weight w causes the prediction interval to be computed for a mean of w future
observations.

See the Save Residuals procedure discussed on page 176 for an alternative method
of obtaining fitted values for cases in your data file.

Plots Statistix offers a fitted line plot and four residual plots directly from the Results
menu. You can also save fitted values and residuals (page 176) and use the Scatter
Plot procedure described in Chapter 4 to graph additional residual plots.

The Fitted Line plot is only available when there’s one independent variable in the
model. You can see an example of this on the next page. The dialog box includes
options to plot the observed values and vertical bars representing confidence
intervals.

The Std Resids by Fitted Values plot is a scatter plot with the standardized
residuals on the Y axis and the fitted values on the X axis. You use this plot to
check the assumption of constant variances.

The Std Resids Time Series plot is a scatter plot with case numbers on the X axis.
You’ll find this plot useful when the data are sequentially ordered.

The Normal Probability Plot is used to create a rankit plot of the standardized
residuals to test the assumption of normality. The Shapiro-Wilk statistic for
normality is also given.

Save Statistix can do extensive residual analysis, which is very important for model
Residuals evaluation. Systematic patterns in the residuals can suggest transformations or
additional variables that would improve the model. Residuals are important for
“outlier” analysis and to identify unusual observations. You can also use residuals
to find those observations that are most influential in determining the values of the
regression coefficient estimates. Space doesn’t permit a detailed discussion here;
if you’re interested, please consult Weisberg (1985).

Use the tab and shift-tab keys to move the cursor to the residual name you’re
interested in. Then type the name of a new or existing variable. You can click on
the down-arrow button to display the list of current variables. You can also change
the value for percent coverage for predicted and confidence intervals by entering a
new value in the C.I. Percent Coverage box.

When you’ve finished entering variable names, press the OK button to start the
computations. The results aren’t displayed. The residuals are stored as variables
and can be examined using procedures such as Scatter Plot, Shapiro-Wilk Test,
Print, and Runs Test. Each of the residual options is described in more detail
below.

The Fitted Value option saves the fitted (predicted) values for the dependent
variable in a new variable. You can plot these values against the dependent
variable to see how well the model fits the observed values.

The Residual option saves the raw residuals. You use these to examine how well
the model fits the data. Systematic trends indicate that data transformations may
be needed or that important independent variables weren’t included in the model.
Large residuals may indicate observations that are “outliers”. The raw residual is
computed as the observed value of the dependent variable minus the fitted value.

The Leverage option saves the leverage values in a new variable. Leverage
measures how influential a point is in determining the regression coefficients.
Depending on the values of the independent variables, some data points are much
more influential in determining the values of the regression coefficients than
others. In general, the further the values of the independent variables are from
their averages, the more influential these data points are in determining the
regression coefficients. Points with “unusual” independent variable values have
high leverage. If you’re familiar with the matrix representation of multiple
regression, leverage is calculated as the diagonal elements of the X(XᵀX)⁻¹Xᵀ
matrix (assuming no weights were used).

The Standardized Residual option saves the standardized residuals in a new
variable. Raw residuals are very popular for examining model fit, although they do
possess at least one potential problem. Data points located at “unusual” values of
the independent variables will have high leverage, which means that they tend to
“attract” the regression line and seldom have large residuals. Standardized
residuals adjust for this problem to some extent by standardizing each raw residual
with its standard error (also known as “studentizing”). The standard error of a
residual is computed as the square root of the quantity MSE × (1 − LEVERAGE),
where MSE is the residual mean square for error.
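
In symbols, the standardized residual is e / sqrt(MSE × (1 − h)), where h is the
leverage, as in this small sketch (illustrative names, not Statistix code):

import numpy as np

def standardized_residuals(residuals, leverage, mse):
    # raw residuals studentized by SE = sqrt(MSE * (1 - leverage))
    e = np.asarray(residuals, dtype=float)
    h = np.asarray(leverage, dtype=float)
    return e / np.sqrt(mse * (1.0 - h))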

The Distance option computes Cook’s distance measure (Cook 1977, Weisberg
1985) and saves it in a new variable. It’s very useful for finding influential data
points, and it considers the effects of both the dependent and independent variables
(LEVERAGE considers the independent variables only). Cook’s distance for an
observation can be thought of as a measure of how far the regression coefficients
would shift if that data point was eliminated from the data set. Remember that it’s
a distance measure and not a test statistic. Usually, Cook’s distance by itself isn’t
very useful because it’s a function of sample size and the number of variables in
the regression model. The next option, P (Distance), which “standardizes” Cook’s
D, is generally more useful.

P (Distance) calculates the “pseudo-significance level” of the confidence bound
associated with the Cook’s distance and saves it in a new variable. You can think
of Cook’s distance for an observation as a measure of how far the regression
coefficients would shift if that data point were eliminated from the data set. It
seems reasonable to standardize this distance by examining what level confidence
bound the shifted regression coefficients would fall on. For example, if
eliminating the i-th data point causes the new coefficients to shift to a position that
corresponds to the edge of a 90% confidence region around the original estimates,
this may be cause for concern. Again, Cook’s distance isn’t a test statistic but
rather a distance measure, and you shouldn’t think of P (Distance) as an observed
significance level (hence the term “pseudo-significance”) but as a standardized
distance.

The Outlier option computes the t-statistic for the outlier test and saves it in a new
variable. You may suspect an observed value of the dependent data that deviates
substantially from the predicted value of being an outlier. This is usually tested
with a t-statistic, which is the result of this option. The computations follow Cook
(1977).

P (Outlier) computes the p-value for the t-statistics resulting from the outlier test
and saves it in a new variable. The computed p-value is appropriate for a priori
tests for a single outlier. In other words, it’s the appropriate observed significance
level if you were interested in testing one particular case that’s suspected of being
an outlier before the data was observed. This is a rather unusual circumstance;
more often, you notice potential outliers only after inspection of the residuals.

If you suspect a case of being an outlier after inspecting the data, you must give it
special consideration. Suppose there were n cases. You now need to decide
whether the value of the observed t-statistic is “unusual” given that it’s the “most
unusual” out of n repeated t tests. If the a priori p-values are used, then too many
cases will be detected as “outliers”, so you need to somehow “inflate” the a priori
p-values. This inflation can be easily performed with Bonferroni’s inequality
(Weisberg 1985).

For example, suppose there were n cases inspected for outliers. Then,
Bonferroni’s inequality says that the actual observed significance level will be no
greater than nP, where P is the p-value returned by P (Outlier). In practice, the
observed significance level may be substantially less than nP; Bonferroni’s
procedure has an undesirable property of becoming too conservative as n increases.

As noted earlier, you can compute Outlier and P (Outlier) for cases that were
omitted and not used in the regression. Suppose m cases are used in the regression
and m' are omitted. If you’re interested in inspecting all m + m' cases for outliers,
then you should use n = m + m'. If you’re interested in just the omitted cases, then
n = m'. The choice of n is your responsibility; Statistix always gives you the a
priori p-value.

The Confidence Interval L. L. and Confidence Interval U. L. options compute
the lower and upper limits of the confidence interval for the fitted line. Use the
C.I. Percent Coverage box to change the coverage level.

The Prediction Interval L. L. and Prediction Interval U. L. options compute the
lower and upper limits for predicted values for future observations. If the original
regression model didn’t include a weighting variable, the results that are saved are
the limits of the prediction intervals for single future observations. Otherwise, the
results are the limits of prediction intervals for means of w future observations,
where w is the value of the weight variable.

Treatment of Statistix returns values for certain residual menu options for cases that aren’t used
Omitted for estimation. Values are computed for predicted value, residual, leverage,
Cases and outlier, and P (outlier) for omitted cases. A case that’s not omitted may still not
Missing Y have been used for estimation if some of the values for that case are missing. If the
Values only missing value for a case is the dependent variable, you can obtain values for
predicted value and leverage.

These statistics derived from cases not used for estimation are especially useful for
model validation. For omitted cases, predicted values are just what their name
implies. For omitted cases, the residual option returns predicted residuals.
Predicted residuals are, as you’d expect, the difference between the observed
dependent variable value and predicted value (eq. 2.2.22 of Cook and Weisberg
1982). For cases that aren’t used in the regression, leverage is somewhat
unfortunate terminology. “Unusualness” or “generalized distance” would be a
better term. For cases not used for estimation, this statistic measures how unusual
the independent variable values of a case are relative to the values for cases used
for estimation (see Weisberg 1985).

The interpretation of outlier is interesting and important. For cases used for
estimation, outlier returns the t-statistic for the null hypothesis that the particular
case conforms to the model fitted to the rest of the cases. An intuitive approach to
constructing such a test is to first perform the regression without the particular
case. From this regression, you can compute the predicted residual for the
particular case, say e. It’s fairly easy to see that the standard error of e is simply
the standard error of prediction, and the resulting statistic e/SE(e) should follow a
t-distribution under the null hypothesis that E[e] = 0. This leads to an alternative
interpretation of outlier as a predicted Studentized residual. Since the standard
error used for Studentizing was derived from a data set that didn’t include the
particular point of interest, Cook and Weisberg (1982) refer to this as external
Studentizing. Note that for the m cases used for estimation, the corresponding
outlier values are computed from the m different data sets that did not include the
case of interest. (The usual standardized residual is internally Studentized, which
means the set of cases used to derive the standard error included the case for the
residual.)

If there were m cases used in the original regression, it would appear that m
separate regressions are needed to compute outlier just for the points used for
estimation. However, as Cook and Weisberg (1982) show, there are some
remarkable identities that permit the calculation of these statistics from quantities
obtained from only the one regression using the m points. If the regression
contains p parameters, the outlier statistic for a case used for estimation has
m − p − 1 degrees of freedom associated with it.

This now suggests how to handle cases not included in the regression. As before,
the statistic is e/SE(e), where e is the predicted residual and SE(e) is the standard
error for prediction. (The calculation of the standard error of prediction is
described under Computational Notes.) For all cases not in the set of m cases used
for estimation, outlier uses the same regression based on the m cases for the base
comparison. (This is in contrast to the situation for Outlier for cases that were
used in estimation; each such Outlier uses a different data set of m − 1 cases for
base comparison.) This gains an extra case for computing the comparison
regression for cases not used for estimation, so the Outlier statistic for a case not
used for estimation has m − p degrees of freedom associated with it.

For cases not used for estimation, the usual standardized (internally Studentized)
residuals can’t be computed by definition. However, when you interpret them as
predicted Studentized residuals, you can use outlier statistics for most of the
purposes for which you’d want standardized residuals.

Sensitivity Linear regression requires the assumption that the independent variables are
measured without error. The Sensitivity procedure determines how sensitive the
estimated regression coefficients are to violations of this assumption. You need to
specify the expected size of the errors associated with each of the independent
variables, which can be done in one of two ways.

If you know the standard deviations of the errors, you can enter them directly using
the Standard Deviations method. For example, if the values for an independent
variable are the readings from some instrument, the standard error of the measures
may be known from calibration results. However, precise information about the
errors often won’t be available. Sometimes the best you can do is to say, “I trust
the figures down to, but not including, the d-th digit, which may be off by plus or
minus 1.” Often d will point to the digit at which rounding error occurred. In this
case, you specify the position of the leftmost untrustworthy digit relative to the
decimal point. For example, d = 2 for 9340.0 means you think the 4 may be in
error. A way to model this is to assume that an additive random error is associated
with 9340.0 and is uniformly distributed on the interval −5 to 5, which is exactly
what Statistix does when treating ROUNDING error sensitivity. Negative values
of d are used to indicate digits right of the decimal; for example, d = −4 points to
the 8 in 0.03480. Values for d must be in the set of nonzero integers from 21 to
−20.

We’ll use the Hald data again to illustrate the procedure. Assume the regression
was Y = Chem1 Chem2 Chem3 Chem4. Assume the figures are probably accurate
except perhaps for rounding error in the least significant digit. Statistix assumes
that this rounding error is uniformly distributed in the interval −0.5 to 0.5. The
options are specified in the dialog on the preceding page. The results are presented
below.

Coefficients' Sensitivities to Errors in Independent Variables

Variable in which Error Occurs
Chem1 Chem2 Chem3 Chem4
Error SD 0.289 0.289 0.289 0.289
Constant 0.469 0.554 0.558 0.575
Chem1 0.829 0.943 0.923 0.962
Chem2 0.372 0.452 0.463 0.474
Chem3 -0.355 -0.250 -0.271 -0.230
Chem4 -0.169 -8.75E-02 -7.89E-02 -6.74E-02

What’s the interpretation of these results? When the independent variables
represent continuous quantities, they’ll always have some error associated with
them. The question is whether the errors are large enough to seriously influence
the results.

Suppose the independent variables were “perturbed” with errors of the specified
size and the regression analysis rerun. We’d be happy if the slope estimates after
perturbation were similar to those from before. The sensitivity coefficients
indicate just how similar the new coefficients after perturbation are expected to be
to the previous ones. More specifically, sensitivity coefficients give an index of
how many significant figures will agree with the coefficients. For example, the
estimates 0.89768 and 0.89735 agree to 3 figures, and this degree of agreement is
expected when the sensitivity coefficient is near 3. Small sensitivity coefficients
are undesirable. Of course “small” is subjective, but if the values are near or less
than 1, such as in the example, the analysis clearly is very sensitive to errors in the
independent variables and the regression coefficients can’t be trusted.

It’s interesting to compare these sensitivities with those for the model Heat =
Chem1 Chem4:

Coefficients' Sensitivities to Errors in Independent Variables

Variable in which Error Occurs
Chem1 Chem4
Error SD 0.289 0.289
Constant 2.496 2.866
Chem1 1.816 2.205
Chem4 1.918 2.276

The sensitivity coefficients have increased considerably, which means the slope
coefficients are now substantially less sensitive to the errors. This makes an
interesting and very important point about this data set. Chem1, Chem2, Chem3,
and Chem4 are nearly collinear; for all cases, the sums of these variables are
between 95 and 98. When the data are nearly collinear, the slope coefficients
become very sensitive to minor changes in the independent variables. Many hours
have been wasted by researchers trying to divine the interpretation of slope
coefficients that were artifacts of the interaction of near-collinearity and errors in
measurement. Reducing the degree of correlation in the set of independent
variables has substantially reduced the sensitivities to the influence of roundoff
errors.

If you’re not certain how trustworthy your independent variables are, try some
“worst-case scenarios”. In other words, use large potential errors.

The sensitivity calculations are described in Weisberg (1982, 1985). We display
the negative log (base ten) of the relative sensitivity coefficient, which in
Weisberg’s notation is −log10(gjk). When you specify a placeholder d, the standard
deviation of the error is calculated by assuming the error is a uniform random
variable on the interval centered at 0 of width 10^(n − sgn(n)) (sgn(n) = 1 if n > 0,
sgn(n) = 0 if n < 0). The standard deviation is then 10^(n − sgn(n))/√12. If the
absolute value of an estimated coefficient is too small (< 1.0E−06) for reliable
calculations, an M is displayed.
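
In Python, the implied error standard deviation for a digit position d can be
sketched as follows (a helper of our own for illustration, not Statistix code):

import math

def rounding_error_sd(d):
    # SD of a uniform error on an interval of width 10**(d - sgn(d)),
    # where nonzero d marks the leftmost untrustworthy digit
    width = 10.0 ** (d - (1 if d > 0 else 0))
    return width / math.sqrt(12.0)

print(rounding_error_sd(1))   # 0.2887..., the 0.289 shown for the Hald data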

Stepwise AOV This option produces a stepwise analysis of variance table for the specified
Table regression model. The row order of the stepwise table reflects the order in which
the independent variables are specified in the model. For the Hald example, the
results are presented on the next page.

The table lists the individual contribution to the sums of squares, the cumulative
sums of squares, cumulative mean squares, cumulative adjusted R², and Mallows’
Cp statistic (see Miscellaneous Regression Topics on page 183).

Stepwise Analysis of Variance of Heat

Source      Individual SS  DF    Cum SS    Cum MS  Cum Adj R²  Mallows Cp   P
Constant           118372
Chem1             1450.08   1   1450.08   1450.08      0.4916       202.5   2
Chem2             1207.78   2   2657.86   1328.93      0.9744         2.7   3
Chem3                9.79   3   2667.65    889.22      0.9764         3.0   4
Chem4                0.25   4   2667.90    666.97      0.9736         5.0   5
Residual            47.86  12   2715.76   2715.76

R² 0.9824 Mean Square Error (MSE) 5.98295
Adjusted R² 0.9736 Standard Deviation 2.44601
AICc 42.944
PRESS 110.35

This table is useful for testing the contribution of subsets of the independent
variables to the overall model. For example, you’re interested in testing whether
Chem3 and Chem4 add anything to the model once Chem1 and Chem2 are already
included. We first find the difference between the cumulative sums of squares for

182 Statistix 10 User’s Manual


the model with all independent variables in it and one with just Chem1 and
Chem2: 2667.90 ! 2657.86 = 10.04. Because we’re testing the contribution of two
parameters, this sum of squares has two degrees of freedom associated with it and
the resulting F test is F = (10.04/2)/5.983 = 0.839 (5.983 is the residual mean
square of the full model). An F statistic this small suggests that Chem3 and
Chem4 contribute little to the model once Chem1 and Chem2 are already included.
However, this test says little about whether the model with just Chem1 and Chem2
is a “good” model.
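
The arithmetic of this subset F test is easy to script, as in the sketch below
(an illustrative helper, not Statistix code):

def subset_f(ss_full_cum, ss_sub_cum, q, mse_full):
    # F test for whether q extra variables add anything to the sub-model
    return ((ss_full_cum - ss_sub_cum) / q) / mse_full

print(subset_f(2667.90, 2657.86, 2, 5.98295))  # about 0.84, as in the text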

Variance- Select this option to obtain the variance-covariance matrix of the regression
Covariance of coefficient estimates.
Betas
Variance-Covariance Matrix of Coefficients

Constant Chem1 Chem4
Constant 4.51131
Chem1 -0.19254 0.01916
Chem4 -0.08332 1.65E-03 2.37E-03

The diagonal elements of the matrix are the variances of the regression
coefficients. The off-diagonal values are the covariances; for example, 0.00165 is
the covariance of the coefficient estimates of Chem1 and Chem4.

This matrix is most commonly used for constructing confidence regions about
coefficient estimates, and for testing hypotheses about various functions of the
coefficient estimates. More detail on these topics can be found in Weisberg
(1985).

Miscellaneous Mallows’ Cp statistic, R², adjusted R², and AICc are important criteria for
Regression evaluating and comparing regression models.
Topics
The Mallows’ Cp statistic is useful for model selection. It’s discussed in detail by
Daniel and Wood (1971), Snedecor and Cochran (1980, p. 359) and Weisberg
(1985). The Cp statistic is based on the fact that not including an important
independent variable in the model results in the fitted response values being
biased. Cp gives an index of this bias. “Good” models have Cp values near to or
less than p, where p is the number of parameters in the model. (Negative values
will occasionally be observed). This statistic is most useful for eliminating
variables that contribute little to the model. However, it tells you nothing about
whether you started with all of the correct independent variables in the first place.
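
For a least squares fit, Cp can be computed as RSS_p/MSE_full − (n − 2p), as
this sketch illustrates (our own helper names, not Statistix code):

def mallows_cp(rss_p, mse_full, n, p):
    # Cp near or below p (the number of parameters) suggests little bias
    return rss_p / mse_full - (n - 2 * p)

print(mallows_cp(57.9045, 5.98295, 13, 3))  # about 2.7, the Chem1+Chem2 model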

AICc is the small-sample version of Akaike’s Information Criterion (AIC), which
is useful for comparing models. AICc is appropriate for small or moderately-sized
data sets (n/p < 40), and is similar to AIC for large data sets. The model with the
lowest value among candidate models is the best (although, not necessarily a good
model). The absolute size of AICc for a particular model has no meaning itself;
rather, it’s the relative values that are important. A useful quantity is the difference
of AICc for a candidate model from the model with the lowest value. The larger
the difference, the less plausible it is that the candidate model is the best. Consider
these rough rules of thumb:

AICc(I) − AICc(min)   Support of Model I
0–2                   Substantial
4–7                   Considerably less
> 10                  Essentially none

The Best Subset Regressions procedure (page 187) reports this difference. See
Burnham et al. (2002) for more about Akaike’s Information Criterion.
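
For least squares models, AICc can be sketched as below. Counting the error
variance as a parameter (k = p + 1) reproduces the 42.944 shown in the Linear
Regression report, though that parameter count is our assumption, not a
documented Statistix formula.

import math

def aicc(rss, n, n_params):
    # small-sample AIC for a least squares fit
    k = n_params + 1                     # assumed: +1 for the error variance
    aic = n * math.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

print(aicc(47.86, 13, 5))  # about 42.94 for the full Hald model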

The R² and adjusted R² statistics measure the goodness of fit of a regression
model. R² measures the proportion of variance in the dependent data explained by
the regression. It’s computed as 1 − RSS/SST, where RSS is the residual sum of
squares and SST is the total sum of squares. A potential problem with R² is that it
always increases as new independent variables are included in the model (RSS
always decreases), even if they don’t possess any relationship with the dependent
variable. Adjusted R² is adjusted for the number of independent variables in the
model to correct for this problem, and, therefore, will often be more interesting
than the unadjusted R². Adjusted R² is computed as 1 − ((n − 1)/(n − p)) × (1 − R²),
where n is the number of cases and p is the number of parameters in the regression.
Adjusted R² is a monotonic function of the residual mean square. Unlike the
unadjusted R², negative adjusted R²'s will occasionally be observed. Chatterjee
and Price (1991), Draper and Smith (1966), and Weisberg (1985) are good
references for more detail on these statistics.
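
These two formulas translate directly into code, as in this sketch (illustrative
helper names):

def r2_stats(rss, sst, n, p):
    # R² and adjusted R² for a constant-fitted model with p parameters
    r2 = 1.0 - rss / sst
    adj = 1.0 - ((n - 1.0) / (n - p)) * (1.0 - r2)
    return r2, adj

print(r2_stats(47.86, 2715.76, 13, 5))  # about (0.9824, 0.9736), as reported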

R² and adjusted R² are useful for comparing models, but you must not use them to
compare a model with a constant fitted to a model forced through the origin. The
total sum of squares for the model without the constant isn’t adjusted for the mean,
which inflates it relative to the model with the constant fitted. The result is that R²
and adjusted R² are usually larger for the no-constant model. The Centered R²,
reported for the Linear Regression procedure for no-constant models, is computed
using the centered sums of squares (the total sums of squares of the dependent
variable adjusted for the mean). The Centered R² for a no-constant model can be
compared to the R² of a constant fitted model of the same dependent variable.

Best Model When there are a moderate number of independent variables, Best Subsets
Selection Regressions is a good way to select the best model. However, the number of
subsets grows rapidly as the number of independent variables increases. If there
are too many independent variables, use the Stepwise Linear Regression
procedure. Draper and Smith (1966) present a good description of popular
stepwise procedures. The potential problem with stepwise procedures is they don’t
necessarily result in a model that’s best when judged by adjusted R², Mallows’ Cp,
or AICc. It’s generally a good idea to try at least two stepwise procedures, such as
backward elimination and forward inclusion, to see if they result in the same
model.

Model selection is somewhat of a craft. Regression analysis is usually performed
(1) to explore possible cause-effect relations, (2) to develop some predictive
relationship, or (3) some combination of these. The relative importance of each of
these goals may have some bearing on the model selection strategy used. Cause-
effect modeling focuses on determining the “important” independent variables.
Predictive modeling focuses more on the development of a good predictor of the
dependent variable than on the contribution due to any particular independent
variable. Obviously, the distinction between cause-effect analysis and predictive
modeling isn’t a sharp one and most analyses include components of both. (As an
aside, you should realize that regression analysis can’t actually establish cause-
effect relationships. It can examine the nature and extent of association between
the dependent and potential independent variables. The interpretation of these
associations as cause-effect is outside the realm of statistics, and lies in the domain
of the appropriate subject-matter field.)

In addition to F tests, there are several other popular statistics for evaluating the
“goodness” of a regression model. Some of these are adjusted R², Mallows’ Cp
statistic, AICc, R², RSS (residual sum of squares) and RMS (residual mean
square). Actually, R² and RSS are equivalent in the sense that they will produce
the same orderings of the models. Adjusted R² and RMS are equivalent to one
another in the same sense. It’s generally best to use the adjusted R², Mallows’ Cp,
and AICc for model selection. R² is useful for comparing models with the same
number of independent variables in them, but the adjusted R² will produce the
same ordering of the models. The advantages of adjusted R² over the unadjusted
R² are discussed in the previous section. There are lots of good references on the
use of adjusted R² and Mallows’ Cp in model selection, such as Weisberg (1985),
Chatterjee and Price (1991), Daniel and Wood (1971), and Snedecor and Cochran
(1980, p. 358).

The job isn’t over when you’ve found the best model, as indicated by the adjusted
R² or Cp statistic. Particularly in the case of cause-effect modeling, it will be of
interest to examine whether all of the independent variables are significant (use the
p-values from the coefficient table). No analysis is complete without an
examination of the residuals. As a minimum, do the standardized residuals show
any trend when plotted against the dependent variable? If they do, the model is not
adequate.

The Normal Probability Plot is valuable for examining whether the assumption of
normally distributed errors has been violated. If the errors aren’t normally
distributed, the significance tests may be invalid. It’s also a good idea to look at
the correlations among the independent variables. High correlations may suggest
problems with collinearity. If this is the case, the Eigenvalues-Principal
Components procedure may be useful.

Computational The core computations are performed using Gentleman’s square root free
Notes modification of Givens’ method (Seber 1977). This is one of the most accurate
methods available. An interesting feature of the method is that it exploits
sparseness (zeros) in the independent variables to reduce computation time. It’s
therefore well suited for performing analyses of (co)variance that involve indicator
variables.

The error variance is estimated as σ*² = RMS, where RMS is the residual mean
square from the regression. Let X be the design matrix containing all the cases
used to find the coefficient estimates B*. The estimated variance of B* is V(B*) =
σ*²(XᵀUX)⁻¹, where U is the weight matrix if weights were used, and U = I
otherwise. Suppose xᵀ is a specific row of X, a case used for estimation. The
fitted value f corresponding to case x is f = xᵀB*. The estimated variance of a
fitted value is then V(f) = σ*² xᵀ(XᵀUX)⁻¹x.

Now assume x wasn’t used for estimation. The corresponding predicted value is p
= xᵀB*. If a weight isn’t specified, the prediction is for a single future observation,
which has estimated variance V(p) = σ*² + V(f). If there is a weight w, then the
prediction is for a mean of w future observations, which has estimated variance
V(p) = σ*²/w + V(f). Let SE(f) and SE(p) be the square roots of V(f) and V(p),
respectively. Then, the confidence interval for the fitted value f and the prediction
interval for the predicted value p are given respectively as f ± SE(f)t and p ± SE(p)t,
where t is the appropriate t value for the specified coverage.

If x is used for estimation, leverage is computed as u·xᵀ(XᵀUX)⁻¹x, where u is the
scalar weight associated with case x. If x wasn’t used for estimation, the
“leverage” (unusualness) is xᵀ(XᵀUX)⁻¹x. Let y be an observed value of the
dependent variable (it may or may not have been used for estimation). A residual
is computed as y − f, and a predicted residual is computed as y − p. An outlier t
value for a case not used for estimation is (y − p)/SE(p).
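
For the unweighted case (U = I), these formulas can be sketched in NumPy as
follows; the function and argument names are illustrative, not Statistix code.

import numpy as np

def fit_and_predict(X, y, x_new, t, w=None):
    # X includes the constant column; x_new is one row of predictor values
    B = np.linalg.lstsq(X, y, rcond=None)[0]
    rms = np.sum((y - X @ B) ** 2) / (len(y) - X.shape[1])  # sigma*^2 = RMS
    XtX_inv = np.linalg.inv(X.T @ X)
    f = x_new @ B                                 # fitted value
    var_f = rms * (x_new @ XtX_inv @ x_new)       # V(f)
    var_p = rms / (w if w else 1.0) + var_f       # V(p), mean of w future obs
    se_p = np.sqrt(var_p)
    return f, np.sqrt(var_f), (f - t * se_p, f + t * se_p)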

Best Subset Regressions
The Best Subset Regressions procedure in Statistix computes the best subset
regression models given a full model that contains all the potential predictor
variables of interest. A specified number of subset models with the highest
adjusted R² are listed for each model size.

Specification

First select the name of the dependent variable (response variable). Highlight a
variable in the Variables box, then press the right-arrow button next to the
Dependent Variable box to move the highlighted variable into that box.

Move candidate independent variables to the Non-forced Indep. Variables list
box. These variables will be used in all possible combinations to form the subset
regressions.

You can select one or more variables to be forced in the regression models. Forced
variables appear in all of the subset models. Listing some independent variables
known to be important can greatly reduce the number of subset models computed.
Move the variables you want forced in all models from the Variables list to the
Forced Indep. Variables list.

To perform weighted least squares regression, select the name of the variable
containing the weights and move it to the Weight Variable box.

Use the Fit Constant check box to specify a model with a constant fitted (checked)
or a model forced through the origin (not checked).

Enter the number of best candidate models you want listed in the results for each
subset model size in the Models/Model Size edit control. You can specify as many
as ten models, but this will be reduced for very large full models to limit the total
number of subset models listed to 150.

Data Up to a total of 200 forced and unforced independent variables can be included in
Restrictions the model (15 is a more practical limit for the number of unforced variables
because of computation time). If any values are missing for a case in the full
model, the entire case is ignored (listwise deletion) for all models. If an
independent variable is too highly correlated with a linear combination of other
independent variables in the full model (collinearity), it’s dropped from the model.
Computation is reinitiated with a new full model in which the offending
independent variable has been dropped. Variables are dropped until such
collinearity has been eliminated and reliable computations can proceed. If
weighted regression is specified, the variable used for the weights cannot have
negative values. Zero weights are treated as missing values.

Example Our example data are the Hald data from Draper and Smith (1966). The variable
Heat is the cumulative heat of hardening for cement after 180 days. The variables
Chem1, Chem2, Chem3, and Chem4 are the percentages of four chemical
compounds measured in batches of cement. The data are listed on page 165 and
are available in the file Sample Data\Hald.sx. The goal is to relate the heat of
hardening to the chemical composition. The analysis is specified in the dialog box
on the preceding page. The results are displayed below.

Best Subset Regression Models for Heat

Unforced Independent Variables: (A)Chem1 (B)Chem2 (C)Chem3 (D)Chem4
3 "best" models from each subset size listed.

              Adjusted   AICc −
P      Cp     R Square   Min AICc   w(AICc)   Resid SS   Model Variables
1 442.9 0.0000 42.22 0.0000 2715.76 Intercept Only
2 138.7 0.6450 31.10 0.0000 883.867 D
2 142.5 0.6359 31.42 0.0000 906.336 B
2 202.5 0.4916 35.77 0.0000 1265.69 A
3 2.7 0.9744 0.00 0.5698 57.9045 A B
3 5.5 0.9670 3.32 0.1083 74.7621 A D
3 22.4 0.9223 14.43 0.0004 175.738 C D
4 3.0 0.9764 3.13 0.1194 47.9727 A B D
4 3.0 0.9764 3.16 0.1172 48.1106 A B C
4 3.5 0.9750 3.88 0.0819 50.8361 A C D
5 5.0 0.9736 10.52 0.0030 47.8636 A B C D

Cases Included 13 Missing Cases 0

Mallows’ Cp statistic, adjusted R², AICc, and residual sums of squares are
produced for each model. More detail on these statistics is given in Linear
Regression under Miscellaneous Regression Topics. The Akaike weights, labeled
w(AICc) in the report, are transformed AICc values which can be directly
interpreted as conditional probabilities for each model (Wagenmakers and Farrell,
2004). Both Cp and AICc identify the model with variables Chem1 and Chem2 as
the best of the candidate models.
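
The weights are computed from the AICc differences as w_i = exp(−Δi/2) /
Σ exp(−Δj/2), as in this sketch (our own helper). Feeding it the eleven AICc
values from the table above reproduces the w(AICc) column to rounding, for
example 0.5698 for the A B model.

import math

def akaike_weights(aicc_values):
    # relative plausibility of each candidate model given the data
    best = min(aicc_values)
    rel = [math.exp(-(a - best) / 2.0) for a in aicc_values]
    total = sum(rel)
    return [r / total for r in rel]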

Note: The number of possible subset models grows rapidly as the number of
independent variables is increased. If there are M independent variables, then
there are 2^M − 1 subset models. For example, with 10 independent variables there
are 1023 subset models, while with 15 independent variables there are 32,767
subset models.

Computational The method used to generate subset statistics is patterned after Clarke (1981). An
Notes advantage of this method is that it’s very accurate without requiring extended
precision computing. The first regression based on the full model is performed
with the method described in Linear Regression.

Stepwise Linear Regression
The Stepwise Linear Regression procedure in Statistix performs stepwise linear
regression. You can specify an empty initial model (forward selection), a full
initial model (backward elimination), or any initial model in between. Stepwise
procedures are popular methods of searching for good subset models, particularly
when the number of independent variables is large. (See also Best Subset
Regressions on page 187.)

Specification

First select the name of the dependent variable (response variable). Highlight a
variable in the Variables box, then press the right-arrow button next to the
Dependent Variable box to move the highlighted variable into that box.

Divide your independent variables between the list boxes for Forced, Starting, and
Non-forced Indep. Vars. Forced variables appear in all steps of the stepwise
procedure and won’t be eliminated regardless of the elimination criteria. The
starting independent variables appear in the initial model and may be eliminated in
subsequent steps. The non-forced independent variables don’t appear in the initial
model but will be considered for selection.

For forward selection, move all your independent variables to the Non-forced
Indep. Vars box. For backwards selection, move all the independent variables to
the Starting Indep. Vars box. You can also use the Starting Indep. Vars box to
enter an initial model that you’ve previously found of interest or an initial model
that includes a variable that was overlooked in a previous stepwise regression.

A stepwise regression builds a regression model by repeating a process that adds
and deletes variables from a list of candidate variables. The stepwise process stops
when no variables not already in the model meet the selection criterion and no
variables in the model meet the elimination criterion.

At each step in the process, the variable with the lowest p-value is selected to enter
the model next. A variable won’t be selected unless its p-value is less than the
value you enter for the P To Enter criterion. A variable’s p-value tests the
hypothesis that the variable’s regression coefficient is zero. You can specify pure
backward elimination by entering 0.0 for the P To Enter criterion to prevent
eliminated variables from reentering the model.

The variable with the highest p-value is eliminated from the model at each step. A
variable won’t be eliminated unless its p-value is greater than the value you enter
for the P To Exit criterion. You can specify pure forward selection by entering 1.0
for the P To Exit criterion to prevent selected variables from being eliminated
later.
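
The selection and elimination cycle is easy to express in code. The following
is a minimal sketch (Python with statsmodels, not Statistix internals; all
names are ours) of the p-value rules described above for ordinary least squares.

import statsmodels.api as sm

def stepwise(y, X, p_to_enter=0.05, p_to_exit=0.05):
    # Note: p_to_exit should be >= p_to_enter to avoid cycling.
    selected = []
    while True:
        changed = False
        # Selection: fit each candidate and pick the lowest p-value.
        pvals = {}
        for v in [c for c in X.columns if c not in selected]:
            fit = sm.OLS(y, sm.add_constant(X[selected + [v]])).fit()
            pvals[v] = fit.pvalues[v]
        if pvals and min(pvals.values()) < p_to_enter:
            selected.append(min(pvals, key=pvals.get))
            changed = True
        # Elimination: drop the variable with the highest p-value.
        if selected:
            fit = sm.OLS(y, sm.add_constant(X[selected])).fit()
            worst = fit.pvalues[selected].idxmax()
            if fit.pvalues[worst] > p_to_exit:
                selected.remove(worst)
                changed = True
        if not changed:
            return selected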

The remaining options let you choose between brief and full report formats, select
a weight variable for weighted regression, and specify a model forced through the
origin.

Data Up to 200 independent variables can be included in the model. If any values are
Restrictions missing for a case in the full model, the entire case is ignored (listwise deletion)
for all models. If weighted regression is specified, the variable used for the
weights can’t have negative values. Zero weights are treated as missing values.
Variables won’t be selected that are found to be too highly correlated with
variables already in the model (collinearity).

Example The data used in our example is the Hald data set from Draper and Smith (1966).
The variable Heat is the cumulative heat of hardening for cement after 180 days.
The variables Chem1, Chem2, Chem3, and Chem4 are the percentages of four
chemical compounds measured in batches of cement. The data are listed on page
165 and are available in the file Sample Data\Hald.sx.

The goal is to relate the heat of hardening to the chemical composition. The
analysis is specified in the dialog box on page 189. The results are displayed
below.

Stepwise Linear Regression of Heat


Unforced Variables: Chem1 Chem2 Chem3 Chem4
P to Enter 0.0600
P to Exit 0.0600
C C C C
H H H H
E E E E
M M M M
Step R2 MSE P 1 2 3 4
1 0.0000 226.314 . . . .
2 0.6745 80.3515 0.00 + . . . D
3 0.9725 7.47621 0.00 + A . . D
4 0.9823 5.33030 0.05 + A B . D
5 0.9787 5.79045 0.205 - A B . .

Resulting Stepwise Model


Variable Coefficient Std Error T P VIF
Constant 52.5773 2.28617 23.00 0.0000
Chem1 1.46831 0.12130 12.10 0.0000 1.1
Chem2 0.66225 0.04585 14.44 0.0000 1.1

Cases Included 13    R2 0.9787           MSE 5.79045
Missing Cases 0      Adjusted R2 0.9744  SD 2.40634

Variables Not in the Model


Correlations
Variable Multiple Partial T P
Chem3 0.8257 0.4113 1.35 0.2089
Chem4 0.9732 -0.4141 -1.37 0.2054

The first part of the report is a history of the stepwise process. It lists the variables
in the model for each step and presents a number of model statistics. In the
example report, the intercept-only model is listed as step 1. Chem4 is added at
step 2, Chem1 is added at step 3, Chem2 is added at step 4, and then Chem4 is
eliminated at step 5. The R² and mean square error (MSE) are listed for each step.
The number in the P column at each step is the p-value for the selected variable (+)
or the eliminated variable (-).

At the end of the stepwise history, a complete coefficient table and model
summary statistics are presented for the final model. The final table in the report
lists the variables not in the final model. It lists the multiple and partial
correlations of each variable with the final model. The T and P columns list the t
value and p-value for each variable were it to be added to the final model.

The full report lists a complete coefficient table for each step. The example report
on the next page displays a full format report for the backward elimination
stepwise regression for the Hald data.

Stepwise Linear Regression of Heat


Unforced Variables: Chem1 Chem2 Chem3 Chem4
P to Enter 0.0500
P to Exit 0.0600

Step Variable Coefficient T P R2 MSE


1 Constant 62.4054 0.89 0.9824 5.98295
Chem1 1.55110 2.08 0.0708
Chem2 0.51017 0.70 0.5009
Chem3 0.10191 0.14 0.8959
Chem4 -0.14406 -0.20 0.8441

2 Constant 71.6483 5.07 0.9823 5.33030
Chem1 1.45194 12.41 0.0000
Chem2 0.41611 2.24 0.0517
Chem4 -0.23654 -1.37 0.2054

3 Constant 52.5773 23.00 0.9787 5.79045
Chem1 1.46831 12.10 0.0000
Chem2 0.66225 14.44 0.0000

Resulting Stepwise Model


Variable Coefficient Std Error T P VIF
Constant 52.5773 2.28617 23.00 0.0000
Chem1 1.46831 0.12130 12.10 0.0000 1.1
Chem2 0.66225 0.04585 14.44 0.0000 1.1

Cases Included 13    R2 0.9787           MSE 5.79045
Missing Cases 0      Adjusted R2 0.9744  SD 2.40634

Variables Not in the Model


Correlations
Variable Multiple Partial T P
Chem3 0.8257 0.4113 1.35 0.2089
Chem4 0.9732 -0.4141 -1.37 0.2054

Logistic Regression
The Logistic Regression procedure is used when you’re interested in studying
how observed proportions or rates depend on particular independent variables. A
direct application of linear regression to proportions is often not satisfactory
because the fitted or predicted values may be less than 0 or greater than 1
(impossibilities for proportions). There may be other shortcomings as well.
Logistic regression provides a convenient alternative by examining the
relationships between the logistic transformation of the proportions and linear
combinations of the predictor (independent) variables. The estimation method is
maximum likelihood. Numerous model diagnostic options are available.

More background on logistic regression can be found in the section titled
Additional Background on Logistic and Poisson Regression on page 207. In
particular, you should be familiar with likelihood ratio tests (also known as
analysis of deviance tests or G² tests) to make full use of this procedure.

Specification

Select the name of the dependent variable in the Variables list box and move it to
the Dependent Variable box.

If each case in your data represents a single observation, the dependent variable
contains only zeros and ones, and the Trials/Case Variable isn’t used. If some or
all of the cases in your data represent more than a single observation, then the
dependent variable contains the sum of the zeros and ones for all trials for each
case, and the Trials/Case Variable stores the total number of trials for each case.
When a Trials/Case Variable is used, the logistic regression is computed on the
ratio of the Dependent Variable to the Trials/Case Variable. This feature is used
when indicator variables are used for the independent variables and trials can be
grouped by unique combinations of values for the independent variables.

Move the independent variables from the Variables list box to the Independent
Variables box. For weighted regression, move the variable containing the prior
case weights to the Weight Variable box.

In some circumstances, the regression coefficient for a term in the model is known
beforehand. Such a term is called an offset and can be “adjusted out” of the
model. The Offset Variable is subtracted from the linear predictor, so the offset
variable must be expressed on the linear predictor’s scale (logit scale).

Use the Fit Constant check box to have the constant fitted in the model (checked)
or have the model forced through the origin (not checked).

Logistic regression uses an iterative procedure (iterative reweighted least squares)
to obtain its maximum likelihood results. You specify the maximum number of
iterations performed before the procedure “gives up” if it hasn’t converged in the
Maximum Iterations edit control.

Iteration stops when the absolute change in the deviance between iterations is less
than the deviance convergence criterion you specify in the Convergence Criterion
edit control. Small values increase the estimation accuracy but may increase the
number of required iterations. The value of 0.01 is usually suitable for obtaining
deviances and coefficient estimates.

Data Up to 200 independent variables can be included in the model. If any values
Restrictions within a case are missing, the case is dropped (listwise deletion). If an independent
variable is too highly correlated with a linear combination of other independent
variables in the model (collinearity), it’s dropped from the model. Computation is
reinitiated with a new model in which the offending independent variable has been
dropped. Variables are dropped until such collinearity has been eliminated and
reliable computations can proceed. For each case, the ratio of the dependent
variable to the number of trials (success/trials) must always be bounded by 0 to 1.
If weighted regression is specified, the weight variable cannot contain negative
weights. Zero weights are treated as missing values.

Example The logit transformation is ln(p/(1-p)), where p is a proportion. The ratio p/(1-p)
is often interpreted as “odds”; for example, if p is the probability of success, then
1-p is the probability of failure and p/(1-p) is the odds for success. By relating
ln(p/(1-p)) to a linear combination of predictors, we’re assuming that the
predictors act in a multiplicative fashion to influence the odds p/(1-p). Remember
that logistic regression is relating the linear combination of predictors to
ln(p/(1-p)) and not to p, as the analysis specification may suggest.
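
As a quick illustration (plain Python, not Statistix code), the logit and its
inverse can be computed as follows:

import math

def logit(p):                        # ln(p/(1-p))
    return math.log(p / (1 - p))

def inverse_logit(x):                # back to a proportion
    return 1 / (1 + math.exp(-x))

print(logit(0.75))                   # odds 3:1 -> about 1.0986
print(inverse_logit(1.0986))         # -> about 0.75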

Finney’s data from Pregibon (1981) are used for this example. The response was
whether or not vasoconstriction occurred in the skin of the digits; variable Occur
takes the values 1 or 0. There are two quantitative predictor variables—the rate
and volume of air inspired by the subject. For analysis, log rate and log volume
were used, variables LRate and LVol, respectively. The data are displayed below,
and are available in the file Sample Data\Vasoconstriction.sx.

Case Occur LRate LVol Case Occur LRate LVol


1 1 -0.1924 1.3083 21 0 0.6931 -0.9163
2 1 0.0862 1.2528 22 0 0.3075 -0.0513
3 1 0.9163 0.2231 23 0 0.3001 0.3001
4 1 0.4055 -0.2877 24 0 0.3075 0.4055
5 1 1.1632 -0.2231 25 1 0.5766 0.4700
6 1 1.2528 -0.3567 26 0 0.4055 -0.5108
7 0 -0.2877 -0.5108 27 1 0.4055 0.5878
8 0 0.5306 0.0953 28 0 0.6419 -0.0513
9 0 -0.2877 -0.1054 29 1 -0.0513 0.6419
10 0 -0.7985 -0.1054 30 0 -0.9163 0.4700
11 0 -0.5621 -0.2231 31 1 -0.2877 0.9933
12 0 1.0116 -0.5978 32 0 -3.5066 0.8544
13 0 1.0986 -0.5108 33 0 0.6043 0.0953
14 1 0.8459 0.3365 34 1 0.7885 0.0953
15 1 1.3218 -0.2877 35 1 0.6931 0.1823
16 1 0.4947 0.8329 36 1 1.2030 -0.2231
17 1 0.4700 1.1632 37 0 0.6419 -0.0513
18 1 0.3471 -0.1625 38 0 0.6419 -0.2877
19 0 0.0583 0.5306 39 1 0.4855 0.2624
20 1 0.5878 0.5878

The data are said to be ungrouped; each case is based on a single trial, so the
Trials/Case Variable is left empty. The logistic regression is specified on page
193.

The resulting coefficient table is displayed below.

Logistic Regression of Occur

Predictor
Variables Coefficient Std Error Coef/SE P
Constant -2.87494 1.30650 -2.20 0.0278
LRate 4.56100 1.81790 2.51 0.0121
LVol 5.17862 1.84314 2.81 0.0050

Deviance 29.23            Somers' D 0.874
P-Value 0.7807            C 0.937
Degrees of Freedom 36

Convergence criterion of 0.01 met after 5 iterations

Cases Included 39 Missing Cases 0

The standard measure of GLM fit is the deviance, also known as the G² statistic.
Under the null hypothesis of the specified model, the deviance asymptotically
follows a chi-square distribution. You can think of it as a “distance measure”
between the fitted model and the actual data—the smaller, the better.

Somers’ D is used to determine the strength and direction of relation between pairs
of observations. Its values range from -1.0 (all pairs disagree) to 1.0 (all pairs
agree). It is defined as (nc - nd) / t, where nc is the number of concordant pairs,
nd is the number of discordant pairs, and t is the total number of pairs with
different responses. C is a measure of the discrimination ability of a model. It’s
equivalent to the area under the ROC curve. C ranges from 0.5 (no discrimination)
to 1.0 (perfect discrimination). It’s computed as (nc + 0.5(t - nc - nd)) / t
(Hanley and McNeil, 1982).
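
Both statistics can be checked from saved fitted values. A minimal sketch
(Python with NumPy, not Statistix code; y holds the 0/1 responses and p the
fitted probabilities, both names ours):

import numpy as np

def somers_d_and_c(y, p):
    # Compare every (response = 1, response = 0) pair of cases.
    diff = p[y == 1][:, None] - p[y == 0][None, :]
    nc = (diff > 0).sum()        # concordant pairs
    nd = (diff < 0).sum()        # discordant pairs
    t = diff.size                # pairs with different responses
    d = (nc - nd) / t
    c = (nc + 0.5 * (t - nc - nd)) / t
    return d, c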

The p-value of 0.7807 suggests this model fits the data fairly well. Maybe it’s not
necessary to include both terms LVol and LRate in the model. When we run the
analysis using only LRate, we find that the deviance is 48.86 (p=0.0918). When
LVol is specified alone, the resulting deviance is 47.06 (p=0.1243). When we run
the analysis with no independent variables, the resulting deviance is 54.04
(p=0.0440).

The following analysis of deviance table summarizes the results (I = intercept, R =
log rate, V = log volume):

Model    Deviance        Difference               DF  Component  P-Value

I        dI   = 54.04    dI  - dIRV = 24.81        2  V and R      0.000
I+R      dIR  = 48.86    dIR - dIRV = 19.63        1  V            0.000
I+V      dIV  = 47.06    dIV - dIRV = 17.83        1  R            0.000
I+R+V    dIRV = 29.23

The column labeled Component shows which terms are being tested. The first row
tests whether LVol and LRate improve the intercept-only model. The deviance for
this test is the difference of the deviances for model I and model I+R+V, which is
54.04 - 29.23 = 24.81. The associated value for the degrees of freedom is the
difference in the number of independent variables in the two models. The p-value
displayed in the last column is computed using the chi-square function in
Probability Functions (see Chapter 17). The second row tests whether LVol
improves the model when I and LRate are already in the model. The deviance for
this test is the difference of the deviances for model I+R and model I+R+V, which
is 48.86 - 29.23 = 19.63. The LRate term is tested in a similar manner in the third
row. The conclusion from this analysis of deviance table is that both LVol and
LRate are needed in the model.
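
The p-values in the table come from the chi-square distribution. A minimal
check (Python with SciPy, not Statistix code):

from scipy.stats import chi2

print(chi2.sf(54.04 - 29.23, 2))    # V and R together: ~0.000
print(chi2.sf(48.86 - 29.23, 1))    # V given I and R:  ~0.000
print(chi2.sf(47.06 - 29.23, 1))    # R given I and V:  ~0.000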

Before accepting this model, you should examine the regression diagnostics. You
should examine the standardized residuals and Cook’s D (or p-value) routinely.

Logistic Once the regression analysis is computed and displayed, the Results menu icon,
Regression which appears as a shovel (think “dig-deeper”), on the toolbar is enabled. Click on
Results Menu the icon to display the regression results menu shown below.

Select Coefficient Table from the menu to redisplay the regression coefficient
table. Like the Linear Regression procedure, Logistic regression offers the options
of saving various residual- and model-diagnostic statistics (see page 176) and
examining the variance-covariance matrix of the regression coefficients (see page
183). The remaining options are discussed below.

Classification The classification table is a 2x2 frequency table of actual and predicted responses.
Table The fitted logistic regression model is used to obtain the estimated value of p for
each case. If the estimated value of p is less than 0.5, the case is placed in
category 0; otherwise, it’s placed in category 1. Accumulating over cases produces
the classification table. The classification table for Finney’s vasoconstriction
data is shown below.

Classification Table for Occur

Predictions
Actual 0 1 Total
0 14 5 19
1 2 18 20
TOTAL 16 23 39

Proportion of category 0 correctly classified 0.737


Proportion of category 1 correctly classified 0.900
Overall proportion correctly classified 0.821
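
The table is simple to reconstruct from saved fitted values. A minimal sketch
(Python with NumPy, not Statistix code; y holds the 0/1 responses and p the
fitted probabilities, both names ours):

import numpy as np

pred = (p >= 0.5).astype(int)              # 0.5 cutoff on fitted p
table = np.zeros((2, 2), dtype=int)
for actual, predicted in zip(y, pred):
    table[actual, predicted] += 1
print(table[0, 0] / table[0].sum())        # category 0 correct: 14/19 = 0.737
print(table[1, 1] / table[1].sum())        # category 1 correct: 18/20 = 0.900
print(np.trace(table) / table.sum())       # overall: 32/39 = 0.821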

Hosmer- The Hosmer-Lemeshow statistics are goodness-of-fit tests suitable for models with
Lemeshow a large number of covariate patterns (unique combinations of values for the
Statistic independent variables). These tests are illustrated here using data from Hosmer
and Lemeshow (1989, p. 92), a study whose object was to identify risk factors for
low birth weights of babies (the data are available in the file Sample
Data\Birthwt.sx). The response variable is Low: 0 = normal birth weight, 1 = low
birth weight.

The table below shows the tests for the model found on page 101 of Hosmer and
Lemeshow.

Hosmer-Lemeshow Goodness of Fit Tests for Low

Decile of Risk
1 2 3 4 5 6 7 8 9 10
Low 0.07 0.09 0.16 0.21 0.28 0.33 0.42 0.49 0.61 0.94 Total
1 Obs 0 1 4 2 6 6 6 10 9 15 59
Exp 0.9 1.6 2.4 3.5 5.0 5.6 6.8 8.6 10.5 14.1 59
0 Obs 19 18 15 17 14 12 12 9 10 4 130
Exp 18.1 17.4 16.6 15.5 15.0 12.4 11.2 10.4 8.5 4.9 130
Total 19 19 19 19 20 18 18 19 19 19 189

Hosmer-Lemeshow Statistic (C) 4.78
P-Value 0.7804
Degrees of Freedom 8

Fixed Cut Points


Low 0.10 0.20 0.30 0.40 0.50 0.60 0.70 0.80 0.90 1.00 Total
1 Obs 2 5 8 9 11 9 7 5 2 1 59
Exp 2.7 4.5 8.3 8.7 10.7 8.8 6.5 5.3 1.7 1.9 59
0 Obs 38 25 25 16 13 7 3 2 0 1 130
Exp 37.3 25.5 24.7 16.3 13.3 7.2 3.5 1.7 0.3 0.1 130
Total 40 30 33 25 24 16 10 7 2 2 189

Hosmer-Lemeshow Statistic (H) 6.17
P-Value 0.6281
Degrees of Freedom 8

The tables are constructed by grouping the observations into ten groups based on
the values of the fitted values (estimated probabilities). Two grouping methods are
used as follows:

The first method groups the data based on percentiles of the fitted values, resulting
in a table with ten “deciles of risk”. The test statistic C = 4.78 is computed from
the observed and expected frequencies within each decile of risk for each outcome.
The p-value 0.7804 is computed from the chi-square distribution with 8 degrees of
freedom.

The values above each of the columns in the deciles-of-risk table represent the
highest fitted value for the column. Observations with the same covariate pattern
are forced into the same decile, which can result in some columns with zero
observed frequencies. In these cases, the statistic isn’t computed.

The second grouping method bases the groups on fixed cut points of the fitted
values. Both the C and H statistics indicate a good fit. See Hosmer and
Lemeshow (1989) for a detailed discussion.
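
For readers who want to see the mechanics, the deciles-of-risk statistic can be
sketched as follows (Python with NumPy and SciPy, not Statistix code; y and p
are the saved responses and fitted values, and ties on covariate patterns are
ignored here for simplicity):

import numpy as np
from scipy.stats import chi2

order = np.argsort(p)                       # sort cases by fitted value
groups = np.array_split(order, 10)          # ten "deciles of risk"
C = 0.0
for g in groups:
    obs1, exp1 = y[g].sum(), p[g].sum()     # observed and expected 1s
    obs0, exp0 = len(g) - obs1, len(g) - exp1
    C += (obs1 - exp1) ** 2 / exp1 + (obs0 - exp0) ** 2 / exp0
print(C, chi2.sf(C, 8))                     # statistic and its p-value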

Odds Ratios Select Odds Ratios from the results menu to obtain odds ratios and 95%
confidence intervals. The table of odds ratios from the logistic regression of the
birth weight data from Hosmer and Lemeshow (1989, p. 94) is presented on the
next page.

Logistic Regression Odds Ratios for Low

Predictor     95% C.I.                  95% C.I.
Variables     Lower Limit   Odds Ratio  Upper Limit
Age 0.91 0.97 1.05
LWT 0.97 0.98 1.00
Race1 1.26 3.54 9.91
Race2 1.00 2.37 5.59
Smoke 1.15 2.52 5.51
PTL 0.87 1.72 3.39
HT 1.62 6.26 24.23
UI 0.87 2.14 5.25

The odds ratios reported in Statistix give the change in the odds for an increase in
one unit of the independent variable. For the dichotomous variable Smoke, the
odds of a low birth weight baby are 2.52 greater for a mother who smokes (Smoke
= 1) than for a mother who doesn’t smoke (Smoke = 0). The odds ratio of 0.97 for
the continuous variable Age is for an increase of one year in age.
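
An odds ratio is just the exponential of a coefficient, and its 95% confidence
interval is the exponential of the coefficient plus or minus 1.96 standard
errors. For example, using the LRate coefficient from the vasoconstriction
model shown earlier (plain Python, not Statistix code):

import math

b, se = 4.56100, 1.81790                    # LRate coefficient and std error
print(math.exp(b - 1.96 * se), math.exp(b), math.exp(b + 1.96 * se))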

General Notes If fitted values are saved, the resulting values are the estimates of expected number
of successes E[N]. If you want estimates of p, first save the fitted values and then
divide them by the total number of trials per case. Estimates for the logits
ln(p/(1-p)) can be obtained from the estimated p’s.

Probit regression is a method very similar to logistic regression. Statistix does not
perform probit regression. For the vast majority of data sets, logistic and probit
analyses will return virtually identical results. We prefer logistic regression
because it can be calculated more efficiently and the logit transform has a simple
interpretation as the log of the odds ratio.

Stepwise Logistic Regression
This procedure performs stepwise logistic regression. Logistic regression is
appropriate for dependent variables that are proportions. Both forward selection
and backward elimination are provided.

Specification

Select the name of the dependent variable in the Variables list box and move it to
the Dependent Variable box.

If each case in your data represents a single observation, the dependent variable
contains only zeros and ones, and the Trials/Case Variable isn’t used. If some or
all of the cases in your data represent more than a single observation, then the
dependent variable contains the sum of the zeros and ones for all trials for each
case, and the Trials/Case Variable stores the total number of trials for each case.

Divide your independent variables between the list boxes for Forced, Starting, and
Non-forced Indep. Vars. Forced variables appear in all steps of the stepwise
procedure and won’t be eliminated regardless of the elimination criteria. The
starting independent variables appear in the initial model and may be eliminated in
subsequent steps. The non-forced independent variables don’t appear in the initial
model but will be considered for selection.

For forward selection, move all your independent variables to the Non-forced
Indep. Vars box. For backwards selection, move all the independent variables to
the Starting Indep. Vars box. You can also use the Starting Indep. Vars box to
enter an initial model that you’ve previously found of interest or an initial model
that includes a variable that was overlooked in a previous stepwise regression.

A stepwise regression builds a regression model by repeating a process that adds
and deletes variables from a list of candidate variables. The stepwise process stops
when no variables not already in the model meet the selection criterion and no
variables in the model meet the elimination criterion.

At each step in the process, the variable whose addition decreases the deviance the
greatest is selected to enter the model next. A variable won’t be selected unless
the p-value for the deviance test (the difference of the deviance for the model
excluding the candidate variable and the deviance of the model including the
candidate variable) is less than the value you enter for the P To Enter criterion.
You can specify pure backward elimination by entering 0.0 for the P To Enter
criterion to prevent eliminated variables from reentering the model.

The variable whose removal from the model increases the deviance the least is
eliminated from the model at each step. A variable won’t be eliminated unless the
p-value for the deviance test is greater than the value you enter for the P To Exit
criterion. You can specify pure forward selection by entering 1.0 for the P To Exit
criterion to prevent selected variables from being eliminated later.

In some circumstances, the regression coefficient for a term in the model is known
beforehand. Such a term is called an offset and can be “adjusted out” of the
model. The Offset Variable is subtracted from the linear predictor, so the offset
variable must be expressed on the linear predictor’s scale (logit scale).

The remaining options let you choose between brief and full report formats, select
a weight variable for weighted regression, specify a model forced through the
origin, specify the maximum number of iterations, and enter a value for the
convergence criterion.

Data A total of 200 independent variables can be included in the model. If any values
Restrictions within a case are missing, the case is dropped (listwise deletion). For each case,
the ratio of the dependent variable to the number of trials (success/trials) must
always be bounded by 0 to 1.

Example We’ll use Finney’s vasoconstriction data described on page 195 to illustrate
forward selection. The dependent variable, Occur, is coded 1 if vasoconstriction
occurred in the skin and 0 if not. There are two candidate independent variables,
LRate and LVol. The analysis is specified on the preceding page. The results are
displayed below.

Stepwise Logistic Regression of Occur
Unforced Variables: LRate LVol
P to Enter 0.0500
P to Exit 0.0500

Step Variable    Coefficient  Std Error  Coef/SE  Deviance  Difference       P

1    Constant        0.05129    0.32036     0.16     54.04

2    Constant       -0.20458    0.36232    -0.56     47.06        6.98  0.0082
     LVol            1.80788    0.77024     2.35

3    Constant       -2.87494    1.30650    -2.20     29.23       17.83  0.0000
     LVol            5.17862    1.84314     2.81
     LRate           4.56100    1.81790     2.51

Resulting Stepwise Model


Variable Coefficient Std Error Coef/SE P
Constant -2.87494 1.30650 -2.20 0.0278
LVol 5.17862 1.84314 2.81 0.0050
LRate 4.56100 1.81790 2.51 0.0121

Deviance 29.23
P-Value 0.7807
Degrees of Freedom 36

Cases Included 39 Missing Cases 0

Three steps were made in arriving at the final model. Because no starting variables
were specified, the model in the first step was the intercept-only model. The
variable LVol was added in step 2. The difference of the deviances for the models
in step 1 and 2 is 6.98 (p=0.0082). The variable LRate was added in step 3. The
complete coefficient table and model statistics for the final model are listed at the
end of the report.

Computational See Hosmer and Lemeshow (1989) for a description of the stepwise algorithm and
Notes the deviance test. More background on logistic regression can be found in the
section titled Additional Background on Logistic and Poisson Regression on page
207.

Poisson Regression
The Poisson Regression procedure performs Poisson regression using the
maximum likelihood estimation method. It’s used when you’re interested in
examining how observed counts depend on particular independent variables. A
direct application of linear regression to counts often isn’t satisfactory because the
fitted or predicted values may be negative; this is impossible for counts. There
may be other shortcomings as well. Poisson regression provides a convenient
alternative. It examines the relationships between the log transformed counts and
linear combinations of the predictor (independent) variables.

More background on Poisson regression can be found in Additional Background
on Logistic and Poisson Regression on page 207. In particular, you should be
familiar with likelihood ratio tests (also known as analysis of deviance tests or G²
tests) to make full use of this procedure.

Specification

Move the dependent variable to the Dependent Variable box and the independent
variables to the Independent Variables box. As with Linear Regression, you can
specify a Weight Variable for prior case weights and force the model through the
origin (uncheck the Fit Constant check box).

In some circumstances, the regression coefficient for a term in the model is known
beforehand. Such a term is called an offset, and the offset option allows it to be
“adjusted out” of the model. The offset variable is subtracted from the linear
predictor, so the Offset Variable must be expressed on the linear predictor’s scale
(i.e., natural log of counts).

Poisson regression uses an iterative procedure (iterative reweighted least squares)
to obtain the maximum likelihood estimates. You can specify the Maximum
Iterations performed before the procedure “gives up” if it hasn’t converged.

Iteration stops when the absolute change in the deviance between iterations reaches
the deviance Convergence Criterion. Decreasing the criterion will increase the
estimation accuracy but may increase the number of iterations required. The
default value of 0.01 is usually suitable for obtaining deviances and coefficient
estimates. Decreasing the criterion appears to improve the accuracy of coefficient
standard errors and the regression diagnostics.

Data Up to 200 independent variables can be included in the model. If any values
Restrictions within a case are missing, the case is dropped (listwise deletion). If an independent
variable is too highly correlated with a linear combination of other independent
variables in the model (collinearity), it’s dropped from the model. Computation is
reinitiated with a new model in which the offending independent variable has been
dropped. Variables are dropped until such collinearity has been eliminated and
reliable computations can continue. If a prior weight variable is specified, the
weight variable cannot contain negative weights. Zero weights are treated as
missing values.

Example Maxwell’s data, presented in Nelder and Wedderburn (1972), are used. The
analysis treats a 5 × 4 contingency table giving the number of boys (Boys) with
four different ratings for disturbed dreams in five different age categories:

                          Rating
Age group     Age      4     3     2     1
5 - 7          1       7     3     4     7
8 - 9          2      13    11    15    10
10 - 11        3       7    11     9    23
12 - 13        4      10    12     9    28
14 - 15        5       3     4     5    32

Age has the values 1 - 5, and Rating the values 1 - 4. We’re interested in whether
there’s a linear × linear interaction of Age and Rating.

First we create indicator variables (dummy variables) for the main effects as if we
were using linear regression (see, for example, Weisberg 1985). There are 4 main
effect degrees of freedom for Age and 3 main effect degrees of freedom for Rating.
The main effect indicator variables R1, R2, and R3 for Rating are created using the
Indicator Variables procedure from the Data menu (discussed in Chapter 2). The
dialog box is displayed below.

The indicator variables A1, A2, A3, and A4 for Age are created in a similar
manner. The Rating × Age interaction is computed using the Transformation:
RXA = Rating*Age

The data are stored in the file Sample Data\Dreams.sx, and are listed below.

Case Boys Age Rating A1 A2 A3 A4 R1 R2 R3 RXA DummyV
1 7 1 4 0 0 0 0 0 0 0 4 0
2 3 1 3 0 0 0 0 0 0 1 3 0
3 4 1 2 0 0 0 0 0 1 0 2 0
4 7 1 1 0 0 0 0 1 0 0 1 0
5 13 2 4 1 0 0 0 0 0 0 8 0
6 11 2 3 1 0 0 0 0 0 1 6 0
7 15 2 2 1 0 0 0 0 1 0 4 0
8 10 2 1 1 0 0 0 1 0 0 2 0
9 7 3 4 0 1 0 0 0 0 0 12 0
10 11 3 3 0 1 0 0 0 0 1 9 0
11 9 3 2 0 1 0 0 0 1 0 6 0
12 23 3 1 0 1 0 0 1 0 0 3 0
13 10 4 4 0 0 1 0 0 0 0 16 0
14 12 4 3 0 0 1 0 0 0 1 12 0
15 9 4 2 0 0 1 0 0 1 0 8 0
16 28 4 1 0 0 1 0 1 0 0 4 0
17 3 5 4 0 0 0 1 0 0 0 20 0
18 4 5 3 0 0 0 1 0 0 1 15 0
19 5 5 2 0 0 0 1 0 1 0 10 0
20 32 5 1 0 0 0 1 1 0 0 5 1

We’ll fit two models—one without the interaction and one with the interaction.
The main effects only model is computed first. The model is specified in the
dialog box on page 203. The results are presented in the coefficient table below.

Poisson Regression of Boys

Predictor
Variables Coefficient Std Error Coef/SE P
Constant 1.32623 0.26102 5.08 0.0000
R1 0.91629 0.18708 4.90 0.0000
R2 0.04879 0.22093 0.22 0.8252
R3 0.02469 0.22224 0.11 0.9115
A1 0.84730 0.26082 3.25 0.0012
A2 0.86750 0.26004 3.34 0.0008
A3 1.03302 0.25410 4.07 0.0000
A4 0.73967 0.26522 2.79 0.0053

Deviance 32.46
P-Value 0.0012
Degrees of Freedom 12

Convergence criterion of 0.01 met after 4 iterations

Cases Included 20 Missing Cases 0

The deviance for this model is 32.46 with 12 degrees of freedom; this main effects
only model appears to fit poorly (p = 0.0012).

Next we fit the main effects plus linear interaction model by adding the RXA term.
This model fits much better (deviance = 14.08, df = 11, p = 0.2288). The
contribution to the deviance due to the interaction RXA is 32.46 - 14.08 = 18.38,
which is clearly significant (it’s treated as a chi-square statistic with 1 df). From
the coefficient table, the estimated linear × linear interaction is -0.2051. Nelder
and Wedderburn conclude “that the data are adequately described by a negative
linear × linear interaction (indicating that the dream rating tends to decrease with
age)”.

If we look at the regression diagnostics, Cook’s distance calls attention to the cell
in the lower right; the count in Age = 5, Rating = 1 is a bit high, and Cook’s
distance indicates this is a rather influential case. Let’s see what happens when
this point is fitted separately. To do this, a new variable is created using the
Transformation:
If Age=5 AND Rating=1 Then DummyV = 1 Else DummyV = 0

When the model: Boys = R1 R2 R3 A1 A2 A3 A4 RXA DummyV is fitted, the
new deviance is 9.58 (10 degrees of freedom, p = 0.4781). The RXA interaction
slope is now -0.13758. The change in deviance due to the DummyV term is 4.50 =
14.08 - 9.58, which can be treated as a chi-square statistic with 1 df. The p-value
of 0.034 (from Probability Functions) is small enough to make you suspect that the
lower right corner does require special attention. This isn’t the definitive analysis
of this data set; the point is just to show the value of using the regression
diagnostics to gain a better understanding of the data.
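
Both chi-square tests in this discussion are easy to verify (Python with SciPy,
not Statistix code):

from scipy.stats import chi2

print(chi2.sf(32.46 - 14.08, 1))    # RXA interaction: ~0.00002
print(chi2.sf(14.08 - 9.58, 1))     # DummyV term: ~0.034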

We could have fitted the main effects only model a little easier using the Log
Linear Models procedure in Chapter 11. Log Linear Models is especially suited
for fitting models with only qualitative predictors; the drawback of using Poisson
regression is that you must manually create the indicator variables. However, Log
Linear Models can’t deal with quantitative predictors and so, for example, they
couldn’t compute the linear × linear interaction RXA. Poisson regression must be
used when you need the standard errors of the coefficients.

Poisson Once the regression analysis is computed and displayed, the Results menu icon,
Regression which appears as a shovel (think “dig-deeper”), on the toolbar is enabled. Click on
Results Menu the icon to display the regression results menu shown below.

Select Coefficient Table from the menu to redisplay the regression coefficient
table. Poisson regression offers the options of saving various residual- and
model-diagnostic statistics and examining the variance-covariance matrix of the
regression coefficients. These options are performed in the same manner as in
Linear Regression (pages 176 and 183).

General By relating ln(Y) to a linear combination of predictors, we’re assuming that the
Comments predictors act in a multiplicative fashion to influence the counts Y. Remember that
Poisson regression is relating the linear combination of predictors to ln(Y) and not
Y, as the analysis specification might suggest.

Additional Background on
Logistic and Poisson Regression
Analyzing For those unfamiliar with logistic and Poisson regression, the following sections
Proportions give you a brief background on why and when these procedures should be used.
and Counts
The analysis of counts and proportions is the objective of discrete, or categorical,
data analysis. The theory of analysis of variance and regression is more mature
than methods for discrete data analysis. So it’s natural that many of the techniques
for discrete data analysis are based on ideas from analysis of variance and
regression.

The typical application of least squares procedures [analysis of (co)variance or
regression] usually involves the assumption that the dependent variable is some
quantity that can be measured on a continuous scale, such as millimeters or grams.
It’s also usually assumed that the random errors in the dependent variable are
independent normal random variables with identical variances, perhaps after
suitable transformations or weightings. The goal of analysis, then, is to examine
whether the dependent variable is influenced by the independent variables of
interest.

Problems arise when least squares is applied to data sets where the dependent
variable is discrete counts or proportions that arose from discrete counts. Suppose
you were to fit a simple linear regression model p = a + bx to a data set, where the
p’s are proportions. You can always find values of x where p is less than 0 or
greater than 1, a clearly undesirable situation for a model of proportions.
Likewise, if you fit the model c = a + bx where now c is discrete count data, you
can find values of x that result in negative predicted counts, also undesirable.

You can avoid these problems by transforming the dependent data. For example,
when the data are proportions, the logistic transformation ln [p/(1-p)] creates a new
variable that ranges from -∞ to +∞. With count data, you can achieve a similar
scaling by converting the data to the log scale. We’re then interested in how a
linear combination of the independent variables is related to the transformed data.
This involves estimating the slope coefficients and testing their significance.
However, even after the data have been transformed, the usual application of least
squares for estimation often doesn’t work very well because the errors don’t
closely approximate the usual assumption of identical variances.

The application of least squares to transformed discrete data can be improved by
using various weighting schemes to adjust for unequal variances. A classic
example of such a procedure is minimum logit χ², described in Snedecor and
Cochran (1980). These techniques should be viewed as approximations to the
preferred method of estimation, which is maximum likelihood (ML) estimation.
These approximate methods work well in some situations and poorly in others.
The appeal of the approximate methods has been that they are easy to calculate
with traditional methods.

Efficient general algorithms for ML estimation of linear models fitted to
transformed discrete data are now available and are part of a larger body of theory
known as generalized linear models, or GLM’s. Statistix uses these procedures to
make it possible to perform ML estimation with no more effort than required for a
traditional least squares analysis. If you have count data, we suggest you use the
appropriate ML procedure. This may require some background reading if you’re
not familiar with ML procedures, especially likelihood ratio tests. If you
understand how F tests are performed in the usual regression situation, likelihood
ratio tests are easy. The ML procedures always work as well as the approximate
weighted least squares procedures. More important, the ML procedures will work
well in situations where least squares fails dismally.

If you’re interested in proportions, Logistic Regression is the appropriate
procedure. If you have count data, consider either Poisson Regression or Log-
Linear Models. If you have count data and your independent variables include
continuous variables (“covariates”), use Poisson Regression. More detail to help
you decide which to use is found within each of the respective descriptions.

There are many good references on discrete data analysis. Bishop et al. (1975)
give a thorough treatment of discrete analysis for categorical designs, the discrete
analogs to analysis of variance. Fienberg (1977) gives a concise, readable account
of such models. McCullagh and Nelder (1983) consider a broader class of models,
including those with continuous variates, the analogs to regression or analysis of
covariance. Cox (1970) and Hosmer and Lemeshow (1989) are good references
for the logistic model. The references in these books can direct you to more
specific areas of interest.

Generalized The procedures for Logistic Regression and Poisson Regression are based on the
Linear Models theory of generalized linear models (McCullagh and Nelder 1983), or GLM’s. The
class of models included in GLM’s is quite rich; multiple, logistic, and Poisson
regression are the most commonly encountered members of GLM’s, but many
others are included.

Many of the statistics in linear regression have generalized analogs in GLM’s. A
distinction is that many of the statistics for GLM’s in general are justified by large
sample approximations, while the statistics for normal theory linear regression are
“exact”. The performance of these approximations is an active field of
investigation.

First we’ll discuss the statistics displayed on the coefficient table. The standard
measure of GLM fit is the deviance, also known as the G² statistic. Under the null
hypothesis of the specified model, the deviance asymptotically follows a chi-
square distribution. The deviance plays a role similar to that of the residual error
in linear regression. You can think of it as a “distance measure” between the fitted
model and the actual data—the smaller, the better. When used as an overall
goodness-of-fit measure, the corresponding p-value should be interpreted with
caution; it’s best viewed as simply a convenient way to standardize the deviance
for comparison purposes. A more traditional goodness-of-fit measure is Pearson’s
chi-square, which Statistix does not display (it’s easy to compute from residual
results if desired). Pearson’s chi-square has the same asymptotic distribution as
the deviance (under the null hypothesis); the reason for preferring the deviance is
that the deviance can be used to construct a stepwise analysis of deviance table
similar to the stepwise analysis of variance table displayed in linear regression.
Pearson’s chi-square doesn’t lend itself as readily to this use.

Stepwise analysis of variance tables and their uses are described in Linear
Regression. Analogous analysis of deviance tables can be constructed for
Logistic and Poisson Regression (see examples). Understanding the construction
and interpretation of such tables is essential to performing regression analyses
properly. Statistix doesn’t automatically generate analysis of deviance tables
because the structure of these tables often depends on the goals of the investigation
imposed by the subject matter. However, such tables are easy to construct by hand
from the displayed results.

In addition to the model deviance, Coef/SE statistics and associated p-values are
displayed on the coefficient table. Again, these are justified by large sample
theory. Better tests for the individual contributions of independent variables can
be constructed as 1 degree of freedom tests using the deviance (see examples).

Direct analogies exist for most of the residual diagnostics you can select from the
residuals and fitted values menu in linear regression. The computation of leverage
is as described in McCullagh and Nelder (1983) and Pregibon (1981). The
standardized residual, r, is described in McCullagh and Nelder (eq. 11.1); we
describe its calculation at the end of this section on page 211. Leverage values, h,
are the diagonal elements of the H matrix described by McCullagh and Nelder
(sect. 11.3).

If you’re interested in Pearson (chi-square) residuals rather than standardized
residuals, you can calculate them as r(1 - h)^½ using Transformations. If you square
the Pearson residuals and then sum them, you’ll obtain the Pearson chi-square
goodness-of-fit statistic for the model. Cook’s distance is computed as described
in McCullagh and Nelder (1983) and Pregibon (1981). Computing the “p-value”
for Cook’s D should simply be regarded as converting D to an alternate scale that
may help make influential points more easily recognizable. The method for
computing the “p-value” of Cook’s D is the same as that used in linear regression.
The outlier t-statistic and its p-value should be regarded as experimental; at this
point, a cautious interpretation of these statistics is that they are monotonic
transforms of the standardized residuals (Weisberg 1985) and should be sensitive
to outliers.
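
For example, having saved the standardized residuals and leverages, the Pearson
chi-square can be accumulated as follows (Python with NumPy, not Statistix
code; r and h are the saved arrays, names ours):

import numpy as np

pearson = r * np.sqrt(1 - h)        # Pearson residuals, r(1 - h)^½
print((pearson ** 2).sum())         # Pearson chi-square statistic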

Aliasing Parameters being estimated are said to be aliased if the associated independent
variables are (nearly) linear combinations of other independent variables. Another
name for this is collinearity. As in linear regression, aliased independent variables
in logistic and Poisson regression are successively detected and dropped until the
remaining independent variables constitute a linearly independent set.

Some special considerations come into play when this variable-dropping technique
is used for logistic and Poisson regression. The first is what’s called “saw-toothing”.
As the iterative fitting process used for GLM’s proceeds, it occasionally
happens that the information in a parameter diminishes to the point that the
procedure detects it as being aliased, even though it really isn’t. Saw-toothing can
be recognized as occurring when a variable is dropped sometime after the first
fitting cycle, along with a substantial increase in the deviance to the next cycle.
When this is observed, the deviance from the cycle immediately before the one in
which the variable is dropped should be used.

There’s a related issue that you should be aware of. This is the potential
interaction of dropping aliased variables and missing values. A case isn’t included
in the analysis if any of the values for any of the variables (dependent,
independents, or weight) used in the analysis are missing values. Thus, if an independent
variable with missing values is detected as being aliased and is dropped, new cases
may be pulled into the analysis in the next cycle. This makes use of all of the
available data, but it also means the deviance may be based on an expanded set of
cases as iteration continues and hence isn’t truly comparable from cycle to cycle.
This will usually not be of concern, but it’s a point you need to know. If this
behavior influences the results, it can be controlled by using Omit/Restore Cases
to select case subsets for analysis.

Infinite A potential problem with the logistic transformation occurs when the estimate of p
Parameter is very near to 0 or 1. The fitted logit ln[p/(1-p)] then approaches minus or plus
Estimates infinity, which can obviously cause computational problems. A similar problem
occurs in Poisson regression when the fitted count is very near 0; the fitted log
count approaches minus infinity. In these cases, the parameter estimates in the
linear predictors approach infinity. Statistix prevents potential computational
problems in such cases by enforcing an upper bound on the absolute value of the
fitted values of the linear predictors.

This approach appears to work quite well, and biases are generally small.
However, when you know in advance that some parameter estimates are infinite,
it’s better to drop the corresponding data cases from the analysis. An example
would be to delete all rows and columns in two-way contingency tables that have
marginal sums of zero. The major negative consequence of not deleting such cases
is that the deviance may be distorted downward relative to the degrees of freedom.

Equations for The following section outlines some of the equations used for various quantities in
Linear, multiple, logistic, and Poisson regression. The matrix X represents the design
Logistic, and matrix of predictor variable values. The prior weights are the weights specified by
Poisson the weighting variable option. RMS stands for residual mean square. In logistic
Regression regression, p is the expected proportion, or E[p] (the expected logit is ln[p/(1-p)]).
The various terms are described in more detail in McCullagh and Nelder (1983).
The number of cases used for estimation is n. The number of parameters in the
linear model is m (m includes the intercept, if present).

Description                             Linear   Logistic      Poisson

prior weight                            w        w             w
s = scale parameter (σ² = E[RMS])       σ²       1             1
f = variance function                   1        nπ(1 - π)     M
    (π = E[p], M = expected count)
q = iterative weight = fw               w        nπ(1 - π)w    Mw
v = variance of an observation = sf/w   σ²/w     nπ(1 - π)/w   M/w

Let C = (XᵀQX)⁻¹
variance of estimated coefficients V(B*) = sC
variance of a fitted value of the linear predictor V(f) = s xᵀCx
h = leverage = q xᵀCx
(“unusualness” or “distance” is f xᵀCx)

Let e = raw residual (observation minus fitted or predicted value)

standardized residual = r = e [((sf) / w)(1 - h)]^-½
outlier t-statistic = r [(n - m - 1) / (n - m - r²)]^½
Cook’s D = (r² / m) (h / (1 - h))
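
As a worked illustration of the last three formulas (plain Python with NumPy,
not Statistix code; e, h, s, w, n, and m follow the definitions above, with
f = 1 for the linear case):

import numpy as np

def diagnostics(e, h, s, w, n, m, f=1.0):
    r = e / np.sqrt((s * f / w) * (1 - h))            # standardized residual
    t = r * np.sqrt((n - m - 1) / (n - m - r ** 2))   # outlier t-statistic
    D = (r ** 2 / m) * (h / (1 - h))                  # Cook's D
    return r, t, D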

Two Stage Least Squares Regression


The Two Stage Least Squares Regression procedure (2SLS) is used to estimate a
linear equation when one or more of the predictor variables, or right hand side
variables, is an endogenous variable. An endogenous variable is one that’s
determined by the system of equations being solved. For example, quantity and
price of a product are both endogenous variables determined by a system of two
simultaneous equations: the demand curve and the supply curve. The 2SLS model
requires at least one exogenous variable. An exogenous variable is one whose
value is determined outside the system of equations.
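
The two stages are straightforward to sketch (Python with NumPy, not Statistix
internals; rhs and exog are data matrices whose names are ours):

import numpy as np

def tsls(y, rhs, exog):
    Z = np.column_stack([np.ones(len(y)), exog])      # constant + instruments
    X = np.column_stack([np.ones(len(y)), rhs])       # constant + right hand side
    # Stage 1: project the right hand side variables onto the exogenous set.
    Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    # Stage 2: ordinary least squares of y on the stage-1 fitted values.
    return np.linalg.lstsq(Xhat, y, rcond=None)[0]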

Specification First select the name of the dependent variable (response variable) and move it to
the Dependent Variable box.

Then select one or more right hand variables and copy them to the Right Hand
Variables list box. The right hand variables are analogous to the independent
variables in ordinary least squares regression and will be listed in the coefficient
table.

Next select one or more exogenous variables and copy them to the Exogenous
Variables list box. At least one of the exogenous variables must not be among
those selected for the Right Hand Variables.

Use the Fit Constant check box to specify a constant fitted model (checked) or a
model forced through the origin (not checked). Press the OK button.

Data Up to 200 variables (endogenous plus exogenous variables) can be specified. At
Restrictions least one exogenous variable that’s not also a right hand variable must be
specified. If there are missing values for any of the variables for a case, the entire
case is deleted (listwise deletion).

Example The purpose of the example data is to estimate the world demand for copper
(Maurice and Thomas, 2002). The data are listed on the next page, and are
available in the file Sample Data\Copper.sx.

The variables are Q: quantity sold; PC: price of copper; MM: per capita income;
PA: price of aluminum (an alternative to copper); X: inventory of copper; and T:
time as a proxy for technology. Consider the simultaneous functions of the
demand and supply of copper:
Q = f (PC, MM, PA)
Q = f (PC, X, T)

Case Q PC MM PA X T
1 3173.0 26.56 0.70 19.76 0.97679 1
2 3281.1 27.31 0.71 20.78 1.03937 2
3 3135.7 32.95 0.72 22.55 1.05153 3
4 3359.1 33.90 0.70 23.06 0.97312 4
5 3755.1 42.70 0.74 24.93 1.02349 5
6 3875.9 46.11 0.74 26.50 1.04135 6
7 3905.7 31.70 0.74 27.24 0.97686 7
8 3957.6 27.23 0.72 26.21 0.98069 8
9 4279.1 32.89 0.75 26.09 1.02888 9
10 4627.9 33.78 0.77 27.40 1.03392 10
11 4910.2 31.66 0.76 26.94 0.97922 11
12 4908.4 32.28 0.79 25.18 0.99679 12
13 5327.9 32.38 0.83 23.94 0.96630 13
14 5878.4 33.75 0.85 25.07 1.02915 14
15 6075.2 36.25 0.89 25.37 1.07950 15
16 6312.7 36.24 0.93 24.55 1.05073 16
17 6056.8 38.23 0.95 24.98 1.02788 17
18 6375.9 40.83 0.99 24.96 1.02799 18
19 6974.3 44.62 1.00 25.52 0.99151 19
20 7101.6 52.27 1.00 26.01 1.00191 20
21 7071.7 45.16 1.02 25.46 0.95644 21
22 7754.8 42.50 1.07 22.17 0.96947 22
23 8480.3 43.70 1.12 18.56 0.98220 23
24 8105.2 47.88 1.10 21.32 1.00793 24
25 7157.2 36.33 1.07 22.75 0.93810 25

The demand is a function of PC, MM, and PA. Since price (PC) is determined by
both the demand function and the supply function, it’s an endogenous variable.

The model is specified on the preceding page. The variables PC, MM, and PA are
copied to the Right Hand Variables box. The right hand variables MM and PA are
exogenous variables, so we copy them to the Exogenous Variables box too. The
variables X and T aren’t right hand variables in the demand function, but only
appear in the supply function. Thus, they are exogenous variables, so we copy
them to the Exogenous Variables box. At least one such variable is required, or
the model is said to be under-identified. The results are shown below.

Two Stage Least Squares Regression of Q


4 Exogenous Variables: MM, PA, X, T

Predictor
Variables Coefficient Std Error T P
Constant -6837.83 1264.46 -5.41 0.0000
PC -66.4950 31.5338 -2.11 0.0472
MM 13997.7 1306.34 10.72 0.0000
PA 107.662 44.5098 2.42 0.0247

R² 0.9421            Mean Square Error (MSE) 184327
Adjusted R² 0.9339   Standard Deviation 429.333

Cases Included 25 Missing Cases 0

The summary statistics reported have the same interpretation as with ordinary least
squares regression. The negative sign of the coefficient for PC means that the
quantity of copper sold decreases with increasing price of copper, which is what
we’d expect.

2SLS Results Once the regression analysis is computed and displayed, the Results menu icon,
Menu which appears as a shovel (think “dig-deeper”), on the toolbar is enabled. Click on
the icon to display the 2SLS results menu shown below.

Select Coefficient Table from the menu to redisplay the regression coefficient
table. Like the Linear Regression procedure, 2SLS regression offers the options of
computing the Durbin-Watson statistic for autocorrelation, (see page 172),
displaying residual plots (see page 175), saving fitted values and residuals (see
page 176), and examining the variance-covariance matrix of the regression
coefficients (see page 183).

Computational Please see Griffiths et al. (1993) for details.
Notes

Eigenvalues-Principal Components
The Eigenvalues-Principal Components procedure displays the eigenvectors and
eigenvalues for a list of variables. You can also save principal components as new
variables.

Specification The dialog box appears on the next page. Select the variables you want to analyze
and move them to the Eigenvalues Variables list box. To weight the analysis,
move the name of the variable that contains the values to be used as weights to the
Weight Variable box. The eigenvalues can be based on the correlation matrix or
the covariance matrix. Select the method you want by clicking on one of the
Computational Method radio buttons.

Data Up to 200 variables can be specified. If there are missing values for any of the
Restrictions variables for a case, the entire case is deleted (listwise deletion). Negative weights
aren’t allowed.

Example We use the Hald data from Draper and Smith (1966) for this example. The same
data are used to illustrate Linear Regression and are listed on page 165. The
variables Chem1, Chem2, Chem3, and Chem4 are the percentages of four
chemical compounds measured in batches of cement. The analysis is specified on
the preceding page. The results are presented below.

Eigenvalues / Eigenvectors Based on Correlation Matrix

Cumulative
Percent of Percent of
Eigenvalues Variance Variance
1 2.23570 55.9 55.9
2 1.57607 39.4 95.3
3 0.18661 4.7 100.0
4 1.624E-03 0.0 100.0

Vectors
Factor 1 2 3 4
Chem1 0.4760 0.5090 0.6755 0.2411
Chem2 0.5639 -0.4139 -0.3144 0.6418
Chem3 -0.3941 -0.6050 0.6377 0.2685
Chem4 -0.5479 0.4512 -0.1954 0.6767

Eigenvalue analysis is interesting in its own right as a way to analyze multivariate
data structure (Morrison 1977, chap. 8). It’s also an important supplement to
multiple regression analysis (Chatterjee and Price 1991).

Principal After viewing the resulting eigenvalues and eigenvectors, click on the Results
Components menu icon, which appears as a shovel (think “dig-deeper”), and select Principal
Components to display the principal components dialog box. Then enter variable
names for the principal components.

The dialog box on the next page creates three new variables PChem1, PChem2,
and PChem3 to store the first three principal components. The principal
component with the largest eigenvalue (variance component) comes first, the next
largest second, and so on.

In some regression data sets, the independent variables may be highly correlated
with one another. When such collinearity exists, estimates of the regression
coefficients may be unstable and can lead to erroneous inferences. If this is the
case, it’s sometimes useful to perform the regression on a set of principal
components. The computational advantage of using principal components rather
than the original data is that they’re all uncorrelated (they’re said to be orthogonal).
Chatterjee and Price (1991) give a nice example of the application of this method.

Typically, the principal components corresponding to the largest eigenvalues (i.e.,
largest variance components) are used for the regression. Jolliffe (1982) makes the
point that this isn’t always a wise approach.

Generally, it’s best to use the correlation matrix to compute the eigenvalues
because in effect this assigns equal weight to each variable. If the covariance
matrix is used, the results depend on the scales on which the original variables
were measured.

Computational The correlation or covariance matrix is first computed using the method of
Notes updating (see Correlations). The resulting matrix is converted to tridiagonal form
using Householder reductions. The eigenvalues and eigenvectors are then
extracted using the QL decomposition. Details on these methods are described by
Martin et al. (1968) and Bowdler et al. (1968).

In matrix notation, the principal components are calculated as XU, where X is an
n × p matrix derived from the original data and U is a p × m matrix of eigenvectors.
The number of usable cases is n, the number of variables in the original variable
list is p, and m is the number of eigenvectors retained. If the calculations were
based on the correlation matrix, the data in X are the original data after
Studentization (the means subtracted from the values and then divided by their
standard deviations). If the calculations are based on the covariance matrix, X is
the original data with the means subtracted.
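
The same calculation is easy to sketch outside Statistix. Below is a minimal
illustration in Python with NumPy (not Statistix code; the function name and
arguments are our own invention), where X_raw is an n × p NumPy array holding
the original data:

import numpy as np

def principal_components(X_raw, use_correlation=True, m=None):
    # Center the data (subtract the column means).
    X = X_raw - X_raw.mean(axis=0)
    if use_correlation:
        # Studentize: also divide by the sample standard deviations.
        X = X / X_raw.std(axis=0, ddof=1)
    # Correlation (or covariance) matrix of the prepared data.
    S = X.T @ X / (X.shape[0] - 1)
    # Eigenvalues and eigenvectors of the symmetric matrix S.
    eigenvalues, U = np.linalg.eigh(S)
    # Reorder so the component with the largest eigenvalue comes first.
    order = eigenvalues.argsort()[::-1]
    eigenvalues, U = eigenvalues[order], U[:, order]
    if m is not None:
        U = U[:, :m]            # retain the first m eigenvectors
    return eigenvalues, X @ U   # principal components = XU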



C H A P T E R

9
Analysis of Variance
The analysis of variance menu offers you a wide variety of AOV designs. Each of
the designs listed on the menu provides a specific dialog box relating to that design,
making it easy to specify the model. The dialog box for the General AOV/AOCV
procedure offers the flexibility of specifying the model directly by entering a
model statement, and of listing covariates for analysis of covariance.

All of the AOV procedures, with the exception of the Balanced Lattice Design, can
handle unbalanced designs (data with missing values). Unbalanced designs, and
designs with covariates, are solved using general linear models (GLM) techniques.
The sums of squares listed in the AOV tables for unbalanced designs and designs
with covariates are marginal sums of squares, also called type III sums of squares.
These are the correct sums of squares to use when constructing F tests of the
hypothesis that the means for the term in question are equal, given that the
remaining terms are in the model. An important limit on the degree of imbalance
is that any interaction term included in the model must have all cells filled (no
empty cells).

All of the analysis of variance procedures include a results menu.

Once the analysis is computed and the AOV table is displayed, the Results menu
icon, which appears as a shovel (think “dig-deeper”), on the toolbar is enabled.
The menu offers numerous powerful options including multiple comparisons,
linear contrasts, polynomial contrasts, means plot, and residual plots. These
options are discussed in detail on page 249.



Completely Randomized Design
This procedure computes the analysis of variance for the completely randomized
design. As implied by the name, the allocation of treatments to the experimental
units is performed completely at random. The advantages of this design are that
the number of treatments is flexible, the loss of information because of missing
values is relatively low, and the error degrees of freedom are maximized. The
disadvantage is that it’s often inefficient because the experimental error includes
all the variation between experimental units except that due to treatments.

The Completely Randomized Design also goes by the name One-Way Design.
This procedure produces the same results as the One-Way AOV procedure
discussed in Chapter 6.

Specification

The observed data must be entered into a single variable to use this procedure.
Move the variable name containing the observed data to the Dependent Variables
box. If you specify more than one dependent variable, a separate analysis is
produced for each variable. A second variable identifying the different treatment
groups is also required. Move the grouping variable to the Treatment Variable
box. Press the OK button to start the analysis.

Data Up to ten dependent variables can be specified. Sample sizes within treatment
Restrictions levels can be unequal. The maximum number of treatment levels is 500. The
treatment variable can be of any data type. Real values are truncated to whole
numbers. Strings are truncated to 20 characters.

Example The example data are from Steel and Torrie (1980, p. 139). Nitrogen content was
measured from six strains of red clover. Five plots in a greenhouse were randomly
assigned to each strain. The data are displayed below, and are stored in the file
Sample Data\Red clover.sx.



Case Nitrogen Strain Case Nitrogen Strain
1 19.40 3DOk1 16 20.70 3DOk7
2 32.60 3DOk1 17 21.00 3DOk7
3 27.00 3DOk1 18 20.50 3DOk7
4 32.10 3DOk1 19 18.80 3DOk7
5 33.00 3DOk1 20 18.60 3DOk7
6 17.70 3DOk5 21 14.30 3DOk13
7 24.80 3DOk5 22 14.40 3DOk13
8 27.90 3DOk5 23 11.80 3DOk13
9 25.20 3DOk5 24 11.60 3DOk13
10 24.30 3DOk5 25 14.20 3DOk13
11 17.00 3DOk4 26 17.30 Composite
12 19.40 3DOk4 27 19.40 Composite
13 9.10 3DOk4 28 19.10 Composite
14 11.90 3DOk4 29 16.90 Composite
15 15.80 3DOk4 30 20.80 Composite

The analysis is specified on the preceding page. The results are shown below.

Completely Randomized AOV for Nitrogen

Source DF SS MS F P
Strain 5 847.05 169.409 14.37 0.0000
Error 24 282.93 11.789
Total 29 1129.97

Grand Mean 19.887 CV 17.27

Homogeneity of Variances F P
Levene's Test 1.84 0.1423
O'Brien's Test 1.35 0.2762
Brown and Forsythe Test 0.93 0.4794

Welch's Test for Mean Differences

Source DF F P
Strain 5.0 16.66 0.0001
Error 10.8

Component of variance for between groups 31.5241
Effective cell size 5.0

Strain Mean
3DOk1 28.820
3DOk5 23.980
3DOk4 14.640
3DOk7 19.920
3DOk13 13.260
Composite 18.700
Observations per Mean 5
Standard Error of a Mean 1.5355
Std Error (Diff of 2 Means) 2.1715

A standard analysis of variance table is displayed first. Note that the F test
suggests a substantial between-groups (strains of red clover) effect, with a p-value
less than 0.0001. The coefficient of variation (CV) expresses the experimental
error as a percentage of the mean; the higher the CV value, the lower the reliability
of the experiment.
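
Here the CV is the square root of the error mean square expressed as a percentage
of the grand mean: CV = 100 × sqrt(11.789) / 19.887 = 17.27.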

The F test assumes that the within-group variances are the same for all groups.
Three homogeneity of variances tests appear below the AOV table to test this
assumption. The null hypothesis of these tests is that the variances are equal. A
large F statistic and a correspondingly small p-value (say, smaller than 0.05) are
evidence that the variances differ. None of the three tests suggests unequal
variances in our example. See Levene (1960), O’Brien (1981), and Brown and
Forsythe (1974).

The Welch’s test for mean differences (also called Welch’s ANOVA) is an
alternative to the F test in the standard AOV table that’s appropriate when the
homogeneity of variances assumption is suspect (Welch, 1951).

A fixed-effects model (Type I) is appropriate for these data. If a random-effects
model were appropriate (Type II), the component of variance for between groups
may be of interest (see Snedecor and Cochran, chap. 13). The between-groups
variance component and effective cell sample size are displayed below the equality
of variance tests. The computation of effective cell size is described on page 246
of Snedecor and Cochran.

The bottom portion of the report lists a table of treatment means, sample sizes, and
standard errors of the means. The standard error of the difference of two means is
reported when the sample sizes are equal.
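
With n = 5 observations per mean, these standard errors follow directly from the
error mean square: sqrt(11.789 / 5) = 1.5355 for a mean, and
sqrt(2 × 11.789 / 5) = 2.1715 for the difference of two means.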

Randomized Complete Block Design


The Randomized Complete Block (RCB) design is used to reduce experimental
error by dividing the experimental units into blocks of units that are thought to be
similar. The object of blocking is to minimize the variability within the blocks,
and maximize the variability between the blocks. Blocks in the RCB design are of
equal size, each of which contains all the treatments. This procedure handles
missing values by using the GLM technique to compute marginal sums of squares.

This procedure is used to analyze single-factor RCB designs (i.e., one treatment
factor). Multiple-factor experiments in a RCB design can be analyzed using the
Factorial Design procedure discussed on page 229.

Specification



Move the name of the variable containing the observed data to the Dependent
Variables box. If you specify more than one dependent variable, a separate
analysis is produced for each variable. Move the variable that identifies blocks to
the Block Variable box. Move the variable that identifies treatments to the
Treatment Variable box. Press the OK button to start the analysis.

Data Up to ten dependent variables can be specified. The maximum number of block
Restrictions and treatment levels are 500 each. The block and treatment variables can be of any
data type. Missing values are allowed.

Example The example data are from Snedecor and Cochran (1980, sect. 14.2). The
dependent variable is the number of soybeans out of 100 that failed to emerge, and
the treatments are various fungicides (the first treatment level was a no-fungicide
control). The data are listed below, and are stored in the file
Sample Data\Soybeans.sx.

Case Failures Trt Blk Case Failures Trt Blk
1 8 1 1 14 8 3 4
2 10 1 2 15 10 3 5
3 12 1 3 16 3 4 1
4 13 1 4 17 5 4 2
5 11 1 5 18 9 4 3
6 2 2 1 19 10 4 4
7 6 2 2 20 6 4 5
8 7 2 3 21 9 5 1
9 11 2 4 22 7 5 2
10 5 2 5 23 5 5 3
11 4 3 1 24 5 5 4
12 10 3 2 25 3 5 5
13 9 3 3

You can use the Transformations Cat function to generate repetitive sequences,
such as those seen for Trt and Blk. After entering the 25 values for Failures, we
can use the Transformation expressions Trt = Cat(5,5) and Blk = Cat(5,1) to create
these variables.
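
Here Cat(5,5) generates the sequence 1 1 1 1 1 2 2 2 2 2 ..., each of the five levels
repeated five times in a row, and Cat(5,1) generates 1 2 3 4 5 1 2 3 4 5 ...,
matching the Trt and Blk columns in the listing above.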

The model is specified in the dialog box on the preceding page. The results are
presented in the table on the next page. The analysis of variance table appears first
in the report. An F test and the associated p-value are listed for the treatment
variable Trt. The test suggests a between-fungicides effect (p = 0.0219).

Tukey’s one degree of freedom test for nonadditivity is useful when the
experimental design only permits an additive model to be fitted to the data but you
suspect that interaction is present. There’s little suggestion of nonadditivity (p =
0.6150) in this example. If nonadditivity is present, you should consider
transforming your data in an effort to remove it.

The object of using blocks is to increase efficiency by reducing the error mean
square. The relative efficiency indicates the magnitude to which blocking
succeeded in reducing experimental error. In this example, the relative efficiency
of using the RCB design over the completely randomized design is 1.19, which is a
19% increase in precision. See Gomez and Gomez (1984) for computational
details.
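
As a rough reconstruction following the formulas in Gomez and Gomez (1984):
the error mean square a completely randomized design would have had is
estimated from the AOV table as [4(12.46) + (4 + 16)(5.41)] / 24 = 6.585.
Dividing by the RCB error mean square and applying a small correction for the
difference in error degrees of freedom gives
RE = (6.585 / 5.410) × (17 × 23) / (19 × 21) = 1.19.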

Randomized Complete Block AOV Table for Failures

Source DF SS MS F P
Blk 4 49.840 12.4600
Trt 4 83.840 20.9600 3.87 0.0219
Error 16 86.560 5.4100
Total 24 220.240

Grand Mean 7.5200 CV 30.93

Tukey's 1 Degree of Freedom Test for Nonadditivity

Source DF SS MS F P
Nonadditivity 1 1.4957 1.49569 0.26 0.6150
Remainder 15 85.0643 5.67095

Relative Efficiency, RCB 1.19

Means of Failures for Trt Fungicide treatments

Trt Mean
Control 10.800
Fung #1 6.200
Fung #2 8.200
Fung #3 6.600
Fung #4 5.800
Observations per Mean 5
Standard Error of a Mean 1.0402
Std Error (Diff of 2 Means) 1.4711

A table of treatment means, sample sizes, and standard errors of the means is
displayed at the bottom of the report. The standard error of the difference of two
means is reported when the sample sizes are equal.

Results Menu Once the analysis is computed and the AOV table is displayed, the Results menu
icon, which appears as a shovel, on the toolbar is enabled. Click on the icon to
display the results menu. Use the procedures on this menu to compute multiple
comparisons, linear contrasts, polynomial contrasts, means plots, residual plots,
and to save residuals. These options are discussed in detail on page 249.

Computational Oliver’s (1967) generalization of Yates’ algorithm (Daniel, 1976) is used for
Notes balanced designs. Unbalanced designs are computed using general linear models
(Searle, 1987; Glantz and Slinker, 1990).



Latin Square Design
This procedure computes the analysis of variance for Latin Square Designs. This
design simultaneously handles two known sources of variation, commonly referred
to as row-blocking and column-blocking.

Specification

Move the name of the variable containing the observed data to the Dependent
Variables box. If you specify more than one dependent variable, a separate
analysis is produced for each variable. Move the variable that identifies row-
blocking to the Row Variable box. Move the variable that identifies column-
blocking to the Column Variable box. Move the variable that identifies treatments
to the Treatment Variable box.

Data Up to ten dependent variables can be specified. The number of treatments must be
Restrictions equal to the number of rows and columns. The row, column, and treatment
variables can be of any data type. Real values are truncated to whole numbers.
Strings are truncated to 20 characters. Missing values are allowed.

Example The example data are from a field trial to study the effect of row spacing on the
yield of millet (Snedecor and Cochran, 1980). The row and column blocks
represent fertility gradients in two directions. The data are listed on the next page,
and are stored in the file Sample Data\Millet.sx.

If you study the data, you’ll see that the five values for the treatment variable
Spacing appear exactly once for each combination of Row and Column.



Yield Row Column Spacing Yield Row Column Spacing
257 1 1 4 246 3 4 8
230 1 2 10 250 3 5 2
279 1 3 2 203 4 1 2
287 1 4 6 204 4 2 6
202 1 5 8 227 4 3 8
245 2 1 8 193 4 4 10
283 2 2 2 259 4 5 4
245 2 3 10 231 5 1 6
280 2 4 4 271 5 2 8
260 2 5 6 266 5 3 4
182 3 1 10 334 5 4 2
252 3 2 4 338 5 5 10
280 3 3 6

The model is specified in the dialog box on the preceding page. The results are
presented in the table below.

Latin Square AOV Table for Yield

Source DF SS MS F P
Row 4 13601.4 3400.34
Column 4 6146.2 1536.54
Spacing 4 4156.6 1039.14 0.98 0.4523
Error 12 12667.3 1055.61
Total 24 36571.4

Grand Mean 252.16 CV 12.88

Tukey's 1 Degree of Freedom Test for Nonadditivity

Source DF SS MS F P
Nonadditivity 1 142.9 142.89 0.13 0.7298
Remainder 11 12524.4 1138.58

Relative
Efficiency
Completely Randomized Design 1.45
Randomized Complete Block, Row 1.06
Randomized Complete Block, Column 1.40

Means of Yield for Spacing

Spacing Mean
2 269.80
4 262.80
6 252.40
8 238.20
10 237.60
Observations per Mean 5
Standard Error of a Mean 14.530
Std Error (Diff of 2 Means) 20.549

The analysis of variance table appears first in the report. An F test and the
associated p-value are listed for the treatment variable Spacing. The test for
treatment effect is not significant (p = 0.4523).

Tukey’s one degree of freedom test for nonadditivity is useful when the
experimental design only permits an additive model to be fitted to the data but you
suspect that interaction is present. There’s little suggestion of nonadditivity (p =
0.7298) in this example. If nonadditivity is present, you should consider
transforming your data in an effort to remove it. See Snedecor and Cochran (1980)
for computational details.



As with the RCB design, we’re interested in the efficiency of blocking. There are
three relative efficiencies reported, indicating the improved efficiencies of the
Latin square design compared to three alternative designs. Using a Latin square
design in this experiment resulted in a 45% improvement compared to the
completely randomized design, but only a 6% improvement over the RCB design
using rows as blocks. See Gomez and Gomez (1984) for computational details.

A table of treatment means, sample sizes, and standard errors of the means is
displayed at the bottom of the report. The standard error of the difference of two
means is reported when the sample sizes are equal.

Results Menu Once the analysis is computed and the AOV table is displayed, the Results menu
icon, which appears as a shovel (think “dig-deeper”), on the toolbar is enabled.
Click on the icon to display the results menu. Use the procedures on this menu to
compute multiple comparisons, linear contrasts, polynomial contrasts, means plots,
residual plots, and to save residuals. These options are discussed on page 249.

Balanced Lattice Design


The Balanced Lattice Design is an incomplete block design that’s useful when the
number of treatments is large, which can make a RCB design impractical.
Individual blocks don’t contain all treatments. This design requires that the
number of treatments is a perfect square, the block size is the square root of the
number of treatments, and the number of replications is one more than the block
size. Unlike the other AOV procedures discussed in this chapter, this procedure
doesn’t allow missing values.

Specification

Move the name of the variable containing the observed data to the Dependent
Variables box. If you specify more than one dependent variable, a separate
analysis is produced for each variable. Move the variables that identify replicates,
blocks, and treatments to the Replication Variable, Block Variable, and
Treatment Variable boxes respectively. Press the OK button to start the analysis.

Data Up to ten dependent variables can be specified. The maximum number of
Restrictions treatments allowed is 196. The replication, block, and treatment variables can be
of any data type. Real values are truncated to whole numbers. Strings are
truncated to 20 characters. Missing values are not allowed.

Example The example data are from Gomez and Gomez (1984, p. 45). Tiller number per
square meter is recorded from 16 fertilizer treatments of rice in a 4 × 4 balanced
lattice design. The data for the first of five replicates are listed below. The
complete data are stored in the file Sample Data\Tiller.sx.

Case Y Rep Blk Fert
1 147 1 1 1
2 152 1 1 2
3 167 1 1 3
4 150 1 1 4
5 127 1 2 5
6 155 1 2 6
7 162 1 2 7
8 172 1 2 8
9 147 1 3 9
10 100 1 3 10
11 192 1 3 11
12 177 1 3 12
13 155 1 4 13
14 195 1 4 14
15 192 1 4 15
16 205 1 4 16
. . .
80 220 5 4 14

The model is specified in the dialog box on the preceding page. The results are
presented on the next page.

The analysis of variance table lists the sums of squares computed in the usual
manner. The mean square for the treatment factor is adjusted to account for an
unequal block effect, if one exists, since not all treatments are represented in each
block. The F test for the treatment factor is computed using the adjusted mean
square for treatment and the effective error. The test for treatment effect is
significant (p = 0.0001).

The relative efficiency of the balanced lattice design, compared to the RCB design,
for these data is 1.17, indicating a 17% improvement in efficiency.



Balanced Lattice AOV for Y

Source DF SS MS F P
Rep 4 5946.0
Fert(unadj) 15 26994.3
Blk*Rep 15 11381.8 758.79
Intrablock error 45 14533.3 322.96
Fert(adj) (15) 1600.12 4.33 0.0001
Effective error (45) 369.34
Total 79 58855.5

Grand Mean 171.83 CV 11.18
Relative efficiency, RCB 1.17

Means of Y for Fert

Fert Mean Fert Mean
1 165.76 9 163.00
2 161.04 10 118.82
3 183.92 11 188.19
4 175.68 12 190.54
5 162.88 13 169.51
6 173.82 14 197.23
7 168.43 15 185.67
8 176.92 16 167.78
Observations per Mean 5
Standard Error of a Mean 8.5946
Std Error (Diff of 2 Means) 12.155

A table of treatment means is displayed at the bottom of the report. Like the
treatment mean square, the treatment means are adjusted to account for an unequal
block effect, if one exists, since not all treatments are represented in each block.

Results Menu Once the analysis is computed and the AOV table is displayed, the Results menu
icon, which appears as a shovel (think “dig-deeper”), on the toolbar is enabled.
Click on the icon to display the results menu. Use the procedures on this menu to
compute multiple comparisons, linear contrasts, polynomial contrasts, means plots,
residual plots, and to save residuals. These options are discussed in detail on page
249.

Computational The analysis of variance, relative efficiency, and adjustments to the treatment
Notes means are computed using the algorithms described by Gomez and Gomez (1984).



Factorial Design
The Factorial Design procedure computes the analysis of variance for complete
factorial designs and fractional factorial designs. It can handle factorial
experiments in a completely randomized design, randomized block design without
replication, and a randomized block design with replication.

Specification Move the name of the variable containing the observed data to the Dependent
Variables box. If you specify more than one dependent variable, a separate
analysis is produced for each variable.

The Replication Variable and Block Variable are both optional. If the experiment
was performed in a completely randomized design without replication, then leave
both boxes empty. If the experiment was performed in a randomized block design
without additional replication, then leave the Replication Variable box empty and
move the blocking variable to the Block Variable box. If the experiment is in a
randomized block design with replication, then use both boxes to specify
replication and blocking variables. You can use the General AOV/AOCV
procedure to analyze factorial experiments in other designs, such as a Latin square
or split-plot design.

Move the treatment factor variables to the Treatment Variables box. The
Interaction Terms drop-down list lets you select the highest order of interaction
terms to be included in the model: no interactions, up to 2-way interactions, up to
3-way interactions, etc. Press the OK button to start.

Data Up to ten dependent variables can be specified. The total number of factors
Restrictions (replication, block, and treatment variables) selected can’t exceed ten. The
maximum number of levels for each factor is 500. The factor variables can be of
any data type. Real values are truncated to whole numbers. Strings are truncated
to 20 characters. Missing values are allowed. For unbalanced and fractional
designs, the maximum size of the GLM design matrix is 1000. (The size of the
design matrix is equivalent to the model degrees of freedom: total degrees of
freedom minus error degrees of freedom.)

Example The example data are from a rice yield trial in a fractional factorial design (Gomez
and Gomez, 1984, p. 172). Fractional factorial designs are useful when the
number of factors of interest is so large that it would be too expensive or too
impractical to include the complete set of factorial treatments. The example data
are from a 2⁶ factorial experiment in a ½ fractional design with two blocks of 16
experimental plots each, and with two replications. A partial listing of the data is
presented below. The complete data are available in the file
Sample Data\Fractional.sx.
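
As a check on the layout: a complete 2⁶ factorial has 64 treatment combinations,
so the ½ fraction uses 32 of them (two blocks of 16 plots) in each replication, and
the two replications account for the 64 cases in the data file.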

Case Yield Rep Blk A B C D E F
1 2.92 1 1 0 0 0 0 0 0
2 3.28 1 1 0 0 0 0 1 1
3 3.34 1 1 0 0 0 1 0 1
4 3.29 1 1 0 0 0 1 1 0
5 3.16 1 1 0 1 1 0 0 0
6 3.63 1 1 0 1 1 0 1 1
7 4.00 1 1 0 1 1 1 0 1
8 4.04 1 1 0 1 1 1 1 0
9 3.65 1 1 1 0 1 0 0 0
10 3.77 1 1 1 0 1 0 1 1
11 4.37 1 1 1 0 1 1 0 1
12 4.05 1 1 1 0 1 1 1 0
13 3.45 1 1 1 1 0 0 0 0
14 3.85 1 1 1 1 0 0 1 1
15 3.95 1 1 1 1 0 1 0 1
16 3.88 1 1 1 1 0 1 1 0
. . .
64 4.78 2 2 1 1 1 1 1 1

Since a fractional factorial doesn’t include all treatment combinations, it isn’t
possible to estimate all main effects and interactions. These experiments must be
designed carefully so that all the main effects, and all or most of the low-order
interaction terms, can be estimated. The remaining terms are assumed to have
zero effects.

The analysis is specified on the preceding page. The results are presented on the
next page.

Since up to 3-way interactions was selected in the dialog box, only main effects, 2-
way interactions, and 3-way interactions are included in the AOV table. The
higher-order interaction effects are assumed to be zero and the sums of squares are
pooled to provide the error sums of squares.

Note that not all the possible 2-way and 3-way interactions are listed. This is a
feature of fractional factorial designs. The omitted interactions are aliased with
terms that are listed in the table. Statistix automatically selects which aliased terms
to display, placing a higher priority on low-order terms, and selecting aliased terms
of the same order in lexical order. If you’d like to choose different aliased terms,
then you must use the General AOV/AOCV procedure and list each term you want
included.

Analysis of Variance Table for Yield

Source DF SS MS F P
Rep 1 0.05641 0.05641
Blk 1 3.906E-03 0.00391
Rep*Blk 1 3.906E-03 0.00391
A 1 3.00156 3.00156 324.51 0.0000
B 1 0.57760 0.57760 62.45 0.0000
C 1 2.00222 2.00222 216.47 0.0000
D 1 3.20410 3.20410 346.40 0.0000
E 1 0.50410 0.50410 54.50 0.0000
F 1 1.76226 1.76226 190.52 0.0000
A*B 1 0.03422 0.03422 3.70 0.0639
A*C 1 0.01323 0.01323 1.43 0.2412
A*D 1 1.600E-03 0.00160 0.17 0.6804
A*E 1 1.000E-04 0.00010 0.01 0.9179
A*F 1 0.04101 0.04101 4.43 0.0437
B*C 1 0.03516 0.03516 3.80 0.0606
B*D 1 0.04101 0.04101 4.43 0.0437
B*E 1 0.01381 0.01381 1.49 0.2313
B*F 1 4.225E-03 0.00423 0.46 0.5043
C*D 1 0.35701 0.35701 38.60 0.0000
C*E 1 0.01156 0.01156 1.25 0.2725
C*F 1 3.025E-03 0.00302 0.33 0.5717
D*E 1 0.13876 0.13876 15.00 0.0005
D*F 1 0.04000 0.04000 4.32 0.0462
E*F 1 0.05290 0.05290 5.72 0.0233
A*B*D 1 4.556E-03 0.00456 0.49 0.4882
A*B*E 1 3.906E-03 0.00391 0.42 0.5207
A*B*F 1 0.02403 0.02403 2.60 0.1175
A*C*D 1 0.09151 0.09151 9.89 0.0037
A*C*E 1 0.01756 0.01756 1.90 0.1785
A*C*F 1 9.000E-04 0.00090 0.10 0.7572
A*D*E 1 0.04951 0.04951 5.35 0.0277
A*D*F 1 0.04623 0.04623 5.00 0.0330
A*E*F 1 2.500E-05 0.00002 0.00 0.9589
Error 30 0.27749 0.00925
Total 63 12.4193

Grand Mean 3.6191 CV 2.66

Results Menu Once the analysis is computed and the AOV table is displayed, the Results menu
icon, which appears as a shovel (think “dig-deeper”), on the toolbar is enabled.
Click on the icon to display the results menu. Use the procedures on this menu to
compute means, multiple comparisons, linear contrasts, polynomial contrasts,
means plots, residual plots, and to save residuals. These options are discussed in
detail on page 249.



Split-Plot Design
The split-plot design is a two-factor design suitable for experiments that have more
treatments than can be accommodated in a complete block design. One of the
factors is assigned to the main plot. The main plot is divided into subplots to
which the second factor is assigned. The precision of the effects of the main-plot
factor is sacrificed to improve that of the subplot factor.

Specification

Move the name of the variable containing the observed data to the Dependent
Variables box. If you specify more than one dependent variable, a separate
analysis is produced for each variable.

Move the blocking, or replication, variable to the Replication Variable box. Move
the variable name of the main-plot factor to the Main-plot Factor box. Move the
variable name of the subplot factor to the Subplot Factor box.

Data Up to ten dependent variables can be specified. The maximum number of levels
Restrictions for the replication, main plot, and subplot factors are 500 each. The factor
variables can be of any data type. Real values are truncated to whole numbers.
Strings are truncated to 20 characters. For unbalanced designs, the maximum size
of the GLM design matrix is 1000. (The size of the design matrix is equivalent to
the model degrees of freedom: total DF minus error DF).

Example The example is a split-plot design from Section 16.15 of Snedecor and Cochran
(1980). Tons is the yield of alfalfa in tons per acre. Blk identifies the six blocks
used. Variety is the variety of alfalfa. Date is the time in days between the second
and third cuttings. A partial listing of the data is presented below. The complete
data are available in the file Sample Data\Alfalfa.sx.



Case Tons Variety Blk Date
1 2.17 1 1 106
2 1.58 1 1 35
3 2.29 1 1 54
4 2.23 1 1 71
5 1.88 1 2 106
6 1.26 1 2 35
7 1.60 1 2 54
8 2.01 1 2 71
9 1.62 1 3 106
10 1.22 1 3 35
11 1.67 1 3 54
12 1.82 1 3 71
. . .
72 1.33 3 6 71

The second cutting was on July 27. The third cuttings were on September 1,
September 20, and October 7. One treatment wasn’t cut a third time. We assigned
this group the date November 10, intending to reflect the end of the growing
season. The values for Date are 35, 54, 71, and 106, matching the listing above.
Variety and Blk are qualitative factors whose actual values have meaning only as
labels. When possible, factors such as Date should be represented quantitatively because
response surfaces can then be examined with polynomial contrasts (see Polynomial
Contrasts on page 264).

The analysis is specified in the dialog box on the preceding page. The results are
shown below.

Analysis of Variance Table for Tons Tons of alfalfa

Source DF SS MS F P
Blk 5 4.14982 0.82996
Variety 2 0.17802 0.08901 0.65 0.5412
Error Blk*Variety 10 1.36235 0.13623
Date 3 1.96247 0.65416 23.39 0.0000
Variety*Date 6 0.21056 0.03509 1.25 0.2973
Error Blk*Variety*Date 45 1.25855 0.02797
Total 71 9.12177

Grand Mean 1.5968
CV(Blk*Variety) 23.11
CV(Blk*Variety*Date) 10.47

The split-plot design has two error terms. These are labeled “Error Blk*Variety”
and “Error Blk*Variety*Date” in the AOV table above. The F test for the
main-plot factor Variety is computed using the first error term. The subplot factor
Date and the Variety*Date interaction are tested using the second error term. The
p-value of 0.5412 suggests little difference between varieties. The Date effect
appears to be very significant.
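
Each F statistic is the term’s mean square divided by the mean square of its error
term: F = 0.08901 / 0.13623 = 0.65 for Variety, and F = 0.65416 / 0.02797 = 23.39
for Date.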

There are two coefficients of variation listed, one for each error term. The first
coefficient indicates the degree of precision for the main-plot factor. The second
coefficient indicates the precision for the subplot factor and the Variety*Date
interaction.



Results Menu Once the analysis is computed and the AOV table is displayed, the Results menu
icon, which appears as a shovel (think “dig-deeper”), on the toolbar is enabled.
Click on the icon to display the results menu. Use the procedures on this menu to
compute means, multiple comparisons, linear contrasts, polynomial contrasts,
means plots, residual plots, and to save residuals. These options are discussed in
detail on page 249.

Computational Oliver’s (1967) generalization of Yates’ algorithm (Daniel, 1976) is used for
Notes balanced designs. Unbalanced designs are computed using general linear models
(Searle, 1987; Glantz and Slinker, 1990).

Strip-Plot Design
The strip-plot design is a two-factor design that’s useful when the desired precision
for the two-factor interaction is greater than that of either main effect. The design
calls for a horizontal-strip factor, a vertical-strip factor, and an intersection plot for
the interaction of the two factors.

Specification

Move the name of the variable containing the observed data to the Dependent
Variables box. If you specify more than one dependent variable, a separate
analysis is produced for each variable. Specify the model by moving the variables
for replication, the horizontal-strip factor, and the vertical-strip factor to the
corresponding Replication Variable, Horizontal Factor, and Vertical Factor
boxes. Press the OK button to start the analysis.

Data Up to ten dependent variables can be specified. The maximum number of levels
Restrictions for the replication, horizontal, and vertical factors are 500 each. The factor
variables can be of any data type. Real values are truncated to whole numbers.
Strings are truncated to 20 characters. For unbalanced designs, the maximum size
of the GLM design matrix is 1000.

Example The example data are from a yield trial of six varieties of rice and three levels of
nitrogen fertilizer in a strip-plot design with three replications (Gomez and Gomez,
1984). A partial listing of the data is presented below. The complete data can be
found in Sample Data\Strip-plot.sx.

Case Yield Rep Variety Nitrogen


1 2373 1 IR8 0
2 4076 1 IR8 60
3 7254 1 IR8 120
4 4007 1 IR127-80 0
5 5630 1 IR127-80 60
6 7053 1 IR127-80 120
7 2620 1 IR305-4-12 0
8 4676 1 IR305-4-12 60
9 7666 1 IR305-4-12 120
10 2726 1 IR400-2-5 0
11 4838 1 IR400-2-5 60
12 6881 1 IR400-2-5 120
. . .
54 3214 3 Peta 120

The analysis is specified in the dialog box on the preceding page. The results are
shown below.

Analysis of Variance Table for Yield Grain yield

Source DF SS MS F P
Rep 2 9220962 4610481
Variety 5 5.710E+07 1.142E+07 7.65 0.0034
Error Rep*Variety 10 1.492E+07 1492262
Nitrogen 2 5.068E+07 2.534E+07 34.07 0.0031
Error Rep*Nitrogen 4 2974908 743727
Variety*Nitrogen 10 2.388E+07 2387798 5.80 0.0004
Error Rep*Variety*Nitrogen 20 8232917 411646
Total 53 1.670E+08

Grand Mean 5289.9
CV(Rep*Variety) 23.09
CV(Rep*Nitrogen) 16.30
CV(Rep*Variety*Nitrogen) 12.13

The strip-plot design has three error terms. The F test for the horizontal-strip
factor Variety is computed using the first error term Rep*Variety. The vertical-
strip factor Nitrogen is tested using the error term Rep*Nitrogen. The interaction
Variety*Nitrogen is tested using the Rep*Variety*Nitrogen error term.

Results Menu Once the analysis is computed and the AOV table is displayed, the Results menu
icon, which appears as a shovel (think “dig-deeper”), on the toolbar is enabled.
Click on the icon to display the results menu. Use the procedures on this menu to
compute means, multiple comparisons, linear contrasts, polynomial contrasts,
means plots, residual plots, and to save residuals. These options are discussed in
detail on page 249.



Split-Split-Plot Design
The split-split-plot design is an extension of the split-plot design to accommodate
a third factor. There are three plot sizes: the main plot, the subplot, and the sub-
subplot. There are three levels of precision: the main-plot factor has the lowest
degree of precision, and the sub-subplot factor the highest.

Example

The example data are from a yield trial of rice in a split-split-plot design with three
replications (Gomez and Gomez, 1984, p. 143). The main-plot factor is nitrogen
(N), the subplot factor is management practice (Mgmt), and the sub-subplot factor
is variety (Var). You can view the data by opening the file Sample Data\Split-
split-plot.sx. The model is specified in the dialog box above. The results are
shown below.

Analysis of Variance Table for Yield

Source DF SS MS F P
Rep 2 0.732 0.366
N 4 61.641 15.410 27.70 0.0001
Error Rep*N 8 4.451 0.556
Mgmt 2 42.936 21.468 82.00 0.0000
N*Mgmt 8 1.103 0.138 0.53 0.8226
Error Rep*N*Mgmt 20 5.236 0.262
Var 2 206.013 103.007 207.87 0.0000
N*Var 8 14.145 1.768 3.57 0.0019
Mgmt*Var 4 3.852 0.963 1.94 0.1149
N*Mgmt*Var 16 3.699 0.231 0.47 0.9538
Error Rep*N*Mgmt*Var 60 29.732 0.496
Total 134 373.541

Grand Mean 6.5544
CV(Rep*N) 11.38
CV(Rep*N*Mgmt) 7.81
CV(Rep*N*Mgmt*Var) 10.74



Note the three error terms. The terms that use the mean square of each error term
as the denominator for the F test are listed immediately above them in the table.

Computational Oliver’s (1967) generalization of Yates’ algorithm (Daniel, 1976) is used for
Notes balanced designs. Unbalanced designs are computed using general linear models
(Searle, 1987; Glantz and Slinker, 1990).

Strip-Split-Plot Design
The strip-split-plot design is an extension of the strip-plot design to accommodate
a third factor. The intersection plot of the strip-plot design is divided into subplots
for the third factor. There are four plot sizes: the horizontal strip, the vertical
strip, the intersection plot, and the subplot. There are four levels of precision, with
the subplot factor having the highest degree of precision.

Example The example data are from a yield trial of rice (Gomez and Gomez, 1984, p.155).
The treatment factors are nitrogen, variety, and planting method (broadcast vs.
transplanted). You can view the data by opening the file Sample Data\Strip-split-
plot.sx.

The results are displayed on the next page.

Note that there are four error terms. The terms that use the mean square of each
error term as the denominator for the F test are listed immediately above them in
the table.



Analysis of Variance Table for Yield

Source DF SS MS F P
Rep (A) 2 1.531E+07 7653156
Nitrogen (B) 2 1.165E+08 5.827E+07 36.65 0.0027
Error A*B 4 6359988 1589997
Variety (C) 5 4.909E+07 9818698 3.67 0.0380
Error A*C 10 2.673E+07 2672583
B*C 10 2.461E+07 2461442 2.58 0.0344
Error A*B*C 20 1.911E+07 955675
Planting (D) 1 726028 726028 1.72 0.1982
B*D 2 2467935 1233968 2.92 0.0668
C*D 5 2.376E+07 4751761 11.25 0.0000
B*C*D 10 7513641 751364 1.78 0.1007
Error A*B*C*D 36 1.521E+07 422560
Total 107 3.074E+08

Grand Mean 5371.6
CV(Rep*Nitrogen) 23.47
CV(Rep*Variety) 30.43
CV(Rep*Nitrogen*Variety) 18.20
CV(Rep*Nitrogen*Variety*Planting) 12.10

Computational Oliver’s (1967) generalization of Yates’ algorithm (Daniel, 1976) is used for
Notes balanced designs. Unbalanced designs are computed using general linear models
(Searle, 1987; Glantz and Slinker, 1990).

Repeated Measures Design


For the analysis of variance designs discussed earlier in the chapter, an individual
experimental unit, whether it be a test animal or a plot of land, is assigned to a
single treatment and the response variable is measured only once. For a repeated
measures analysis of variance, an individual experimental unit, or subject, is
observed under several different levels of one or more experimental factors. This
procedure computes the analysis of variance for a variety of repeated measures
designs.

Specification In order to specify a repeated measures design, it’s important to understand the
distinction between a between-subject factor and a within-subject factor. A
between-subject factor is an experimental treatment whose effect is estimated by
observing differences between subjects. The pool of available subjects is divided
into one group for each level of a between-subjects factor, so an individual subject
has only one level of the treatment applied to it. A within-subject factor is one
whose effect is estimated by observing differences within subjects. All subjects
have each level of the treatment applied to them.

A repeated measures design doesn’t require a between-subjects factor, but it must
have at least one within-subjects factor. The simplest repeated measures design is
the one-way repeated measures analysis of variance. In this design, each subject is
observed over the various levels of a single experimental factor (see example I
below).

Move the name of the variable containing the observed data to the Dependent
Variables box. If you specify more than one dependent variable, a separate
analysis is produced for each variable.

If your model includes any between-subject factors, move the variables for those
factors to the Between-Subject Factors box. Move the variable that identifies
subjects to the Subject Factor box. Move the variables that identify within-subject
factors to the Within-Subject Factor box. Press the OK button to start the
analysis.

Data Up to ten dependent variables can be specified. Up to three between-subject
Restrictions factors and three within-subject factors can be specified. All factors are limited to
500 levels each. The factor variables can be of any data type. The number of
subjects per level of between-subject factors may vary. If one of the repeated
measures is missing for a subject, then the entire subject is deleted from the
analysis.

Example I - The example data are from a study to examine the effects of two stages of
One-way RM digestion on the metabolism of dogs (Glantz and Slinker, 1990, p. 392). The first
AOV stage is the act of eating the food (smelling, chewing, and tasting). The second
stage is the digestion that occurs in the stomach. The metabolic rate was observed
after meals were provided to six dogs in three different ways: (1) eating
normally, (2) placing the food in the mouth but bypassing the stomach, and (3)
bypassing the mouth by placing the food directly into the stomach. The data are
shown in the table below, and are stored in the file Sample Data\Eating.sx.



Case Metab Dog Mode Case Metab Dog Mode
1 104 1 Normal 10 114 4 Normal
2 91 1 Mouth 11 106 4 Mouth
3 22 1 Stomach 12 15 4 Stomach
4 106 2 Normal 13 117 5 Normal
5 94 2 Mouth 14 120 5 Mouth
6 14 2 Stomach 15 18 5 Stomach
7 111 3 Normal 16 139 6 Normal
8 105 3 Mouth 17 111 6 Mouth
9 14 3 Stomach 18 8 6 Stomach

The model is specified in the dialog box on the preceding page. The results are
presented below.

Repeated Measures AOV Table for Metab

Source DF SS MS F P
Dog 5 572.9 114.6
Mode 2 36188.4 18094.2 198.35 0.0000
Error 10 912.2 91.2
Total 17 37673.6

Grand Mean 78.278


CV 12.20

Greenhouse-Geisser Corrected P-Values for Nonsphericity

Greenhouse Huynh
Minimum Geisser Feldt
Epsilon Epsilon Epsilon
Source F P P P
Mode 198.35 0.0000 0.0000 0.0000

Sphericity Assumption Tests

Greenhouse Huynh
Minimum Geisser Feldt Mauchly's
Source Epsilon Epsilon Epsilon Statistic Chi Sq DF P
Dog*Mode 0.5000 0.7785 1.0000 0.71543 1.34 2 0.5118

The standard AOV table is displayed first. For designs with three or more repeated
measures, we must consider whether the assumption of sphericity has been met,
and the implications if it has not. Sphericity is an important assumption of a
repeated measures AOV. It refers to the condition where the variances of the
differences between all possible pairs of levels of the within-subject factor are
equal. If sphericity is violated, the resulting F test for the within-subjects factor is
inflated, leading to a test that is too liberal (one that detects differences when none
exist).
Mauchly’s statistic and the associated Chi-squared value test the assumption of
sphericity. The p-value for this example is 0.5118, so we don’t reject the null
hypothesis of sphericity. We’d reject the null hypothesis if the p-value is small
(say, p < 0.05).

When sphericity is suspect, we can use a corrected F test where the degrees of
freedom are altered to produce a test that reduces the type I error rate. Statistix
provides three corrected tests, each based on a statistic called epsilon, which is a
measure of the departure from sphericity. A value of 1 for epsilon indicates
sphericity. The smaller the epsilon, the greater the departure from sphericity.

The most conservative corrected F test is called the Minimum Epsilon test. The
Greenhouse-Geisser test is less conservative, and the Huynh-Feldt test is the least
conservative. Statistix reports the values for epsilon for each of these tests, and the
p-values for the corrected F tests. It’s recommended that you take the corrected
tests into consideration, even when Mauchly’s test is nonsignificant.
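
Each corrected test multiplies both the numerator and denominator degrees of
freedom of the ordinary F test by its epsilon before computing the p-value. For
the Greenhouse-Geisser test of Mode, for example, the degrees of freedom 2 and
10 become 2 × 0.7785 = 1.56 and 10 × 0.7785 = 7.79.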

The F test for the effect of eating modes in the AOV table above is highly
significant for the basic F test, and for all of the corrected tests. Placing food
directly into the stomachs of the dogs resulted in a lower metabolic rate compared
to the other two modes of eating.

Example II - The following example is a two-way repeated measures with one between-subject
Two-way RM factor and one within-subject factor. It compares the effects of alcohol on people
AOV diagnosed with antisocial personality disorder (ASP) and those who don’t have
ASP (Glantz and Slinker, 1990, p. 410). Personality type (ASP and non-ASP) is a
between-subject factor (a subject has either one personality type or the other).
Subjects are given alcohol to drink, and their aggressiveness is evaluated before and
after drinking. Drinking status (sober and drinking) is a within-subject factor. The
data are listed below, and are stored in the file Sample Data\Alcohol.sx.

Aggress P_Type Subject Drink Aggress P_Type Subject Drink
0.81 Non-ASP 1 Sober 0.72 ASP 9 Sober
0.59 Non-ASP 1 Drinking 0.83 ASP 9 Drinking
0.91 Non-ASP 2 Sober 0.82 ASP 10 Sober
1.04 Non-ASP 2 Drinking 0.99 ASP 10 Drinking
0.98 Non-ASP 3 Sober 0.89 ASP 11 Sober
1.11 Non-ASP 3 Drinking 1.17 ASP 11 Drinking
1.08 Non-ASP 4 Sober 1.01 ASP 12 Sober
1.13 Non-ASP 4 Drinking 1.24 ASP 12 Drinking
1.10 Non-ASP 5 Sober 1.10 ASP 13 Sober
1.15 Non-ASP 5 Drinking 1.33 ASP 13 Drinking
1.16 Non-ASP 6 Sober 1.14 ASP 14 Sober
1.16 Non-ASP 6 Drinking 1.47 ASP 14 Drinking
1.19 Non-ASP 7 Sober 1.24 ASP 15 Sober
1.25 Non-ASP 7 Drinking 1.59 ASP 15 Drinking
1.44 Non-ASP 8 Sober 1.34 ASP 16 Sober
1.70 Non-ASP 8 Drinking 1.73 ASP 16 Drinking

The model is specified below.



The results are displayed below.

Repeated Measures AOV Table for Aggress Aggressiveness

Source DF SS MS F P
P_Type 1 0.02050 0.02050 0.16 0.6921
Error P_Type*Subject 14 1.75589 0.12542
Drink 1 0.20320 0.20320 29.23 0.0001
P_Type*Drink 1 0.08303 0.08303 11.94 0.0039
Error P_Type*Subject*Drink 14 0.09732 0.00695
Total 31 2.15995

Grand Mean 1.1378
CV(P_Type*Subject) 31.13
CV(P_Type*Subject*Drink) 7.33

Note that there are two error terms listed in the AOV table. Personality type is
tested using the subject-within-personality-type term P_Type*Subject. Drink and
the two-factor interaction are tested using the P_Type*Subject*Drink term.

The test for the P_Type*Drink interaction is significant. Examining this
interaction using the Means Plot available on the results menu clearly illustrates
that while drinking increases aggressiveness, the increase is much more dramatic
for the ASP personality type.

Box’s Test for We discussed the assumption of sphericity of the covariance matrix for the one-
Equality of way repeated measures design on page 240. Repeated measures designs with one
Covariance or more between-subjects factors require an additional assumption before the F
Matrices tests for within-subject terms are considered valid. Each between-subjects group
has a covariance matrix for the repeated measures associated with it. It’s assumed
that these covariance matrices are homogeneous and that the pooled covariance
matrix is spherical. Statistix reports Box’s test for equality of covariance matrices
for such designs.

The report for the drinking example on the preceding page doesn’t include Box’s
test because there were only two levels for the repeated measures factor Drink.
We’ll consider an example from Winer et al. (1991) that has a between-subject
factor with two levels, and a within-subjects factor with three levels. The data are
available in the file Sample Data\Winer Table 7.5.sx. Box’s test appears at the end
of the report, and is shown below.

Box's Test for Equality of Covariance Matrices

Source Box's M F DF1 DF2 P(F) Chi Sq DF P(Chi Sq)
A*S*B 7.59 0.73 6 464 0.6221 4.50 6 0.6087

The test statistic is Box’s M. Two significance tests are reported, one based on the
F distribution, and one on the Chi-squared distribution. We reject the null
hypothesis that the covariance matrices are homogeneous when the p-value is
small (say, p < 0.05). The p-values for the example are large, so we conclude that
the assumption is valid. For instances where the assumption is not valid, you can
try a power transformation (Ln, Exp, or Sqrt) of the dependent variable.



Results Menu Once the analysis is computed and the AOV table is displayed, the Results menu
icon, which appears as a shovel (think “dig-deeper”), on the toolbar is enabled.
Click on it to display the results menu. Use the procedures on this menu to
compute means, multiple comparisons, contrasts, plots, and to save residuals. An
example of the Covariance Matrix procedure is displayed on the next page. The
remaining options are discussed in detail on page 249.

Covariance The results menu for repeated measures AOV includes an option to display the
Matrix covariance matrices, which doesn’t appear on the menu for other AOV designs.
The covariance matrix for example I (page 239) is shown below.

Covariance Matrix for Metab by Mode

Mode Normal Mouth Stomach
Normal 159.767
Mouth 85.3000 115.500
Stomach -45.2333 -16.7000 21.7667

Average correlation (r) 0.0787

Missing The number of subjects per level of between-subject factors may vary. If one of
Values in the repeated measures is missing for a subject, then the entire subject is deleted
Repeated from the analysis.
Measures

Computational Oliver’s (1967) generalization of Yates’ algorithm (Daniel, 1976) is used for
Notes balanced designs. Designs with unequal numbers of subjects per between-subject
group are computed using general linear models (Searle, 1987; Glantz and
Slinker, 1990). See Winer et al. (1991) for details about covariance matrices,
Mauchly’s test, epsilon, Greenhouse-Geisser test, Huynh-Feldt test, and Box’s test.

General AOV/AOCV
The General AOV/AOCV procedure is a flexible procedure you can use to
analyze many analysis of variance and covariance designs, including ones that are
unbalanced. Models are specified by explicitly entering a model statement. Many
options are available—mean estimation, multiple comparisons of means, general
and polynomial contrasts, residual plots, and least squares estimation of missing
values.

Specification The dependent variable contains the observed data. Select the dependent variable
from the Variables list box and move it to the Dependent Variables box. If you
specify more than one dependent variable, a separate analysis is produced for each
variable.

Models are specified in a manner similar to the usual algebraic expression of
analysis of variance models, such as those illustrated in Snedecor and Cochran
(1980). Use the factor variables to list the terms in your model in the AOV Model
Statement box. The terms are main effects, which are entered as a single variable,
and interaction terms, which are listed as a group of factor variables combined
using stars (e.g., TRT*BLK). The highest-order interaction term is assumed to be
an error term and can often be omitted from the model statement. Other interaction
terms can be indicated as an error term by typing (Error) or (E) after the term (see
examples 5 and 6 on page 245).

Typically, the AOV Model Statement is simply typed in manually. You can also
copy variables from the Variables list box to the current cursor position of the
AOV Model Statement by first highlighting one or more variables in the Variables
list, then pressing the right-arrow button next to the AOV Model Statement. You
can also drag a variable from the Variables box to any position in the AOV Model
Statement box.

For analysis of covariance, select the names of the variables you want to use as
covariates and move them to the Covariables list box.

Example Model specification (list of main effects and interaction terms) is very flexible and
Model best illustrated by example.
Statements
Example 1: Completely randomized design, also called the one-way design (see
page 219 for a discussion of the CRD). If the treatment factor is A, the model is
specified simply as:
A



Example 2: Randomized complete block design (see page 221 for a discussion of
the RCB design). Blk is the factor for blocks and A is the treatment factor.
Blk A

Example 3: Single-factor Latin square design (see page 224 for a discussion of the
Latin square design). The variables Row and Col are the row- and column blocking
factors, and A is the treatment factor.
Row Col A

Example 4: Three-factor factorial in a completely randomized design with all two-
factor interactions (see page 229 for a discussion of factorial designs).
A B C A*B A*C B*C

Note: The example factorial design above could be entered more concisely using
the ALL2 keyword discussed on page 246.
ALL2(A B C)

Example 5: Split-plot design (see page 232 for a discussion of the split-plot
design). The variable Rep is the factor for replication, A is the main-plot factor,
and B is the subplot factor.
Rep A Rep*A(E) B A*B

Note the interaction Rep*A is an error term. The three-factor interaction term
Rep*A*B is also an error term, but was omitted above because Statistix always
adds the highest-order interaction term automatically.

Example 6: Strip-plot design (see page 234 for a discussion of the strip-plot
design). The variable Rep is the factor for replication, A is the horizontal-strip
factor, and B is the vertical-strip factor.
Rep A Rep*A(E) B Rep*B(E) A*B

Example 7: One-way repeated measures design (see page 238 for a discussion of
repeated measures designs). Subj is the subjects factor and A is the within-
subjects factor.
Subj A

Examples 1 - 7 above are all models that could be more easily specified using the
specific AOV procedures discussed earlier in this chapter. But the General
AOV/AOCV procedure allows you to specify other models, including variations of
the above models.

Example 8: A two-factor nested model with factor B nested within factor A.
A B*A

Compare the nested model with the two-factor cross-classified (factorial) model:
A B A*B

Example 9: A three-factor factorial experiment in a split-plot design. The factors
A and B are both main-plot factors, and C is the subplot factor.
REP A B A*B REP*A*B(E) C A*C B*C A*B*C



In most situations, neither the order in which terms are specified in the model nor
the order in which variables are specified within terms has any influence on the
analysis. The only exception occurs in certain models with multiple error terms,
which are described next.

Error Term The (Error) and (E) modifiers behind a term indicate that the term is to be used as
Specification an error term for computing F tests. Note that multiple error terms can be
specified, as in Examples 5, 6, 8, and 9 above. If the highest order interaction is
not explicitly listed in the model, it’s automatically added and is an error term. For
instance, in Example 5, the results would be exactly the same if the model had
been specified as:
Rep A Rep*A(E) B A*B Rep*A*B(E)

When multiple error terms are specified, an F test for a main effect or interaction is
based on the lowest order error term that includes the main effect or interaction
being tested. The lines of an analysis of variance table are organized so that an
error term appears directly below the group of terms that use it for the F tests. See
the sections for the split-plot design and repeated measures design in this chapter
for examples of analysis of variance tables for models with multiple error terms.

Occasionally, there will be error terms of equal order that contain the term for
which a test is desired. For example, consider the model:
X1 X2 X1*X2(ERROR) X3 X2*X3(ERROR) X1*X3 X1*X2*X3(ERROR)

The second-order error terms X1*X2 and X2*X3 both contain X2. Which term
will be used to construct the F test for X2? Statistix scans the model from left to
right. When it encounters the first factor (X1), it assigns it the name “A”. The
next factor encountered (X2) is assigned the name “B”, and so on. The error terms
X1*X2 and X2*X3 are represented as AB and BC, respectively. To decide which
term to use, Statistix always uses the one with the lowest dictionary order, AB in
this case.

ALL Term The ALL modifier is provided to simplify model specifications when you want all
Specification subset interactions. For example, the model
A ALL (B C D)

is equivalent to
A B C D B*C B*D C*D B*C*D

You can also use the ALL2 modifier to specify all terms up to and including 2-
factor interaction terms. So the model statement
ALL2 (A B C)

is equivalent to
A B C A*B A*C B*C

The modifiers ALL3, ALL4, and ALL5 are defined in a similar manner.



Data You can specify up to ten dependent variables. Up to ten factors (control and
Restrictions treatment variables) can be included in a model statement. The maximum number
of levels for each factor is 500. The factor variables can be of any data type. Real
values are truncated to whole numbers. Strings are truncated to 20 characters.
Missing values are allowed for the dependent variables. For unbalanced designs
and designs with covariates, the maximum size of the GLM design matrix is 1000.
(The size of the design matrix is equivalent to the model degrees of freedom: total
degrees of freedom minus error degrees of freedom.)

Pooling of Internally, the AOV is initially treated as if it were a full factorial design; sums of
Sums of squares are computed for all possible terms. Then, if a term isn’t included in a
Squares model, the sums of squares calculated for that term are pooled into the lowest order
interaction that contains that term as a subset. For example, in the model A B C
A*B*C, the sums of squares for A*B, A*C, and B*C are pooled with the A*B*C
sums of squares.

Example The example data are from a two-factor factorial in a randomized block design
with two covariates (Steel and Torrie, 1980, p. 429). The object of the experiment
was to study how weight gain in guinea pigs was affected by forage grown in four
types of soil at two levels of fertilization. Animals were selected for blocks based on
initial weight. The initial weights of each subject were also used as a covariate for
error control and to adjust the means. The second covariate, forage consumed, is
affected by the treatments and is included to help interpret the data. The data are
listed below and are stored in the file Sample Data\Forage.sx.

InitialWt Consumed WtGain Blk Soil Fertilize
220 1155 224 1 1 1
222 1326 237 1 1 2
198 1092 118 1 2 1
205 1154 82 1 2 2
213 1573 242 1 3 1
188 1381 184 1 3 2
256 1532 241 1 4 1
202 1375 239 1 4 2
246 1423 289 2 1 1
268 1559 265 2 1 2
266 1703 191 2 2 1
236 1250 117 2 2 2
236 1730 270 2 3 1
259 1363 129 2 3 2
278 1220 185 2 4 1
216 1170 207 2 4 2
262 1576 280 3 1 1
314 1528 256 3 1 2
335 1546 115 3 2 1
268 1667 117 3 2 2
288 1593 198 3 3 1
300 1564 212 3 3 2
283 1232 185 3 4 1
225 1273 227 3 4 2



The analysis is specified on page 244. The results are displayed below.

Analysis of Variance Table for WtGain

Source DF SS MS F P
Blk 2 395.4 197.7 0.46 0.6408
Soil 3 59216.4 19738.8 46.12 0.0000
Fertilize 1 1850.6 1850.6 4.32 0.0597
Soil*Fertilize 3 1136.6 378.9 0.89 0.4764
InitialWt 1 1341.6 1341.6 3.13 0.1020
Consumed 1 10585.1 10585.1 24.73 0.0003
Error 12 5135.5 428.0
Total 23

Note: SS are marginal (type III) sums of squares

Grand Mean 200.42
CV 10.32

Covariate Summary Table

Covariate Coefficient Std Error T P
InitialWt -0.49430 0.27918 -1.77 0.1020
Consumed 0.15837 0.03184 4.97 0.0003

The F test for soil types is significant. In an analysis with covariates, the F tests
test the adjusted means. For example, the null hypothesis that the means for soil
type, adjusted for initial weight and forage consumed, are equal is rejected (p =
0.0000).

The covariates InitialWt and Consumed also appear in the analysis of variance
table. The F tests for the covariates test the hypothesis that the regression
coefficients for the covariates are zero, given that all other analysis of variance and
covariance terms are in the model. The t tests listed in the covariate summary test
the same hypothesis. Initial weight is not significant, but nearly so (p = 0.1020).
Forage consumed is highly significant (p = 0.0003). The negative coefficient for
InitialWt means that guinea pigs with lower initial weights gained more weight
than those with higher initial weights. Weight gain increased with forage
consumed.

Computational Oliver’s (1967) generalization of Yates’ algorithm (Daniel, 1976) is used for
Notes balanced designs. An algorithm similar to Cooper’s (1968) is used to generate
orthogonal polynomials. Unbalanced designs are computed using general linear
models (Searle, 1987; Glantz and Slinker, 1990).



AOV Results Menu
After the initial analysis is computed and the AOV table is displayed, the Results
menu icon, which appears as a shovel (think “dig-deeper”), on the toolbar is
enabled. Click on the icon to display the results menu. Use the procedures on this
menu to compute means, multiple comparisons, simple effects, linear contrasts,
polynomial contrasts, means plots, residual plots, and to save residuals. This menu
is displayed below.

This results menu is available for all the analysis of variance procedures discussed
in this chapter. Select AOV Table from the menu to redisplay the initial AOV
results. The remaining results options are described on the following pages.

Means and Standard Errors

Single-factor procedures, such as the Completely Randomized Design and the
Randomized Complete Block Design, display a table of treatment means along
with the analysis of variance table. Select the Means and Standard Errors
procedure from the results menu to compute least squares means for the remaining
procedures. The dialog box for the guinea pig forage example discussed on page
247 is shown on the next page.

The dialog box lists the terms in the model available for computing means and standard
errors. Error terms aren’t listed. The main effect Soil and the two-factor
interaction Soil*Fertilize have been selected and moved to the Terms Selected for
Means box. The results are shown on the next page.

The means are least squares estimates based on the model, so they won’t be the
same as the arithmetic means for unbalanced designs or designs with covariates.
The means for analyses with covariates, such as those in this example, are adjusted
for the covariates.



The standard error of a mean is computed using the mean square for error from the
error term associated with it in the original AOV table. The calculations for the
standard errors incorporate the sample sizes and the covariate means, which
explains why the standard errors aren’t all the same in this example. For balanced
designs without covariates, the standard errors are all the same, and the report
format is different from the one below.

Means of WtGain for Soil

Soil N Mean SE
1 6 259.60 8.5942
2 6 126.55 8.4855
3 6 186.16 9.3343
4 6 229.36 9.1457

Means of WtGain for Soil*Fertilize

Soil Fertilize N Mean SE
1 1 3 266.01 12.079
1 2 3 253.19 12.918
2 1 3 144.83 12.770
2 2 3 108.26 12.445
3 1 3 200.65 13.987
3 2 3 171.67 11.963
4 1 3 228.98 14.269
4 2 3 229.74 15.153

The standard error for the difference of two means is included in the report for
balanced designs without covariates. The standard errors for nonorthogonal
designs (unbalanced designs or designs including covariates) are computed as
√L(X′X)⁻¹s²L′, where L is a vector of ones and zeros representing the mean and
(X′X)⁻¹s² is the variance-covariance matrix.
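
For readers who want to check this quantity outside Statistix, here is a minimal
Python sketch, assuming NumPy. The design matrix X and selector vector L below
are hypothetical placeholders, not values produced by Statistix; 428.0 is simply
the error mean square from the AOV table above, reused for illustration.

import numpy as np

def lsmean_se(X, L, s2):
    # Standard error of a least squares mean: sqrt(L (X'X)^-1 s^2 L')
    XtX_inv = np.linalg.inv(X.T @ X)
    return float(np.sqrt(s2 * (L @ XtX_inv @ L)))

# Hypothetical two-group design: an intercept plus one indicator column
X = np.array([[1, 0], [1, 0], [1, 0], [1, 1], [1, 1], [1, 1]], dtype=float)
L = np.array([1.0, 1.0])        # selects the least squares mean of group two
print(lsmean_se(X, L, 428.0))   # equals sqrt(MSE/3) for this balanced design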



Multiple Comparisons
You will often be interested in comparing means for different levels of a main
effect or interaction. This is the function of multiple comparisons procedures.
Multiple comparisons are divided into three categories: all-pairwise comparisons,
comparisons with a control, and comparisons with the best. Statistix offers tests
for all three types of comparisons.

All-pairwise Multiple Comparisons


Two means are said to be similar, or homogeneous, if they’re not significantly
different from one another. This procedure identifies groups (subsets) of similar,
or homogeneous, means. Use of the procedure is first illustrated with an example,
and then some details of its application are discussed.

Let’s use the split-plot example from page 232, where the subplot factor Date was
the date of harvest of alfalfa. Date has four levels, 35, 54, 71, and 106, representing
the number of days into the growing season of each harvest. The dependent
variable Tons is the yield of alfalfa.

First select the main effects and/or interactions for which you want comparisons
and move them to the Terms Selected for Mean Comparisons box. Date was the
only significant term in the example. Next select a comparison method by clicking
on one of the five Comparison Method radio buttons. Enter a value for the
rejection level in the Alpha edit control. Select the report format (examples of
both formats are given below).



The example dialog box above is used to compute Tukey’s HSD comparisons for
the Date term using 0.05 for the rejection level. The results are shown below.

Tukey HSD All-Pairwise Comparisons Test of Tons for Date

Date Mean Homogeneous Groups
106 1.7811 A
71 1.6911 AB
54 1.5744 B
35 1.3406 C

Alpha 0.05 Standard Error for Comparison 0.0533
Critical Q Value 3.810 Critical Value for Comparison 0.1435
Error term used: Variety*Blk*Date, 36 DF
There are 3 groups (A, B, etc.) in which the means
are not significantly different from one another.

The second column of the results shows the least square means of Tons for the
levels of the factor Date. The means have been sorted in descending order so the
largest one is listed in the first row, the next to largest in the second row, and so
on.

The columns of the letters A and B under the heading “Homogeneous Groups”
indicate which means are not significantly different from one another. There are
three columns in the example since there are three groups of similar or
homogeneous means. The first group contains the means for Date = 106 and Date
= 71. The second group contains the means for Date = 71 and Date = 54. The
third group contains only the mean for Date = 35. As you see in this example, it’s
not unusual for the groups to overlap, although they need not. There are four pairs
of means that are different in this example. The mean for Date = 106 is not in
group B, and the mean for Date = 54 is not in group A. So the mean for Date = 106
is different from the mean for Date = 54. Similarly, the means for Dates = 106, 71,
and 54 are not in group C, and the mean for Date = 35 is not in group A or B, so
the mean for Date = 35 is different from the other three means.

You may prefer the triangular matrix report format because it’s easier to identify
means that are significantly different. This report format for the same data is
shown below.

Tukey HSD All-Pairwise Comparisons Test of Tons for Date

Date Mean 35 54 71
35 1.3406
54 1.5744 0.2339*
71 1.6911 0.3506* 0.1167
106 1.7811 0.4406* 0.2067* 0.0900

Alpha 0.05 Standard Error for Comparison 0.0533
Critical Q Value 3.810 Critical Value for Comparison 0.1435
Error term used: Variety*Blk*Date, 36 DF

The treatment levels are listed along the top and down the left side of the table.
The values in the body of the table are differences between pairs of means. The
critical value for a comparison, 0.1435, is the minimum difference between two
means needed for significance. Pairs that are significantly different are flagged
with an asterisk.

It’s easy to use the critical value to construct confidence intervals for the
differences of any two means. Suppose you were interested in 95% confidence
bounds around the difference of Date = 54 and Date = 71. Simply subtract and add
the critical value of the comparison from the difference: 0.1167 ± 0.1435 =
−0.0268 to 0.2602. Note that the confidence interval contains zero, which is
expected since the difference was not significant.

All-pairwise It’s important you understand the difference between (1) the hypotheses being
Comparison tested by the overall F test for a main treatment effect in analysis of variance and
Methods (2) the hypotheses being tested by pairwise comparisons procedures. A contrast is
any linear combination of treatment means such that the linear coefficients sum to
zero (see Contrasts on page 261). A pairwise comparison of two means is a
special case of a contrast where the contrast coefficients are 1 and !1 for the means
being compared, and 0 for all other means.

The overall F test for a treatment effect in AOV is testing the hypothesis that all of
the means are equal. You can think of it as a test of whether all possible contrasts
are zero. If the overall F is significant, it means that there is some contrast that’s
significant, but it doesn’t guarantee that any pairwise comparison is particularly
important. The set of pairwise comparisons is a small subset of the entire set of all
possible contrasts. The F test has to be a conservative test because it must guard
against type I errors (rejecting a null hypothesis when it’s true) over the entire set
of all possible contrasts, not just the smaller subset of pairwise comparisons. If
you’re interested only in the set of pairwise comparisons, you can construct a more
powerful test than the overall F test over this restricted space.

If you’re interested in a single comparison of two means, the most powerful


procedure is the T test. For a single such comparison, the probability of falsely
rejecting a true null hypothesis (type I error) is whatever the significance level of
the T test is. Suppose that there are two comparisons of interest to you. Suppose
you test each one at the level α with a T test. The probability of making a type I
error in each comparison is α, so the probability of making at least one type I error
over both comparisons is greater than α. As the number of comparisons grows,
the probability of making at least one type I error grows toward 1. This probability
of making at least one type I error for all comparisons is called the experimentwise
error rate, in contrast to the comparisonwise error rate. The T test controls the
comparisonwise error rate at α but allows the experimentwise error rate to increase
as the number of comparisons increases. Experimentwise error rate refers to the
maximum experimentwise error rate under a complete or partial null hypothesis.
Under a complete null hypothesis, all the population means are equal; under a
partial null hypothesis, only some of the population means are equal.

If there are P means, there are m = P(P−1)/2 pairwise comparisons, so the number
of comparisons grows rapidly as the number of means increases. Some control
over the experimentwise error rate is desirable. Numerous methods have been
proposed for this, and there’s some disagreement as to the best procedures. The
following discussion describes the procedures available in Statistix.

First, some terminology: Suppose Mi and Mj are two means. The comparison
between means i and j is Lij = Mi − Mj. For a complete, balanced AOV, the
standard error of Lij is SE(Lij) = (2MSE/n)½, where MSE is the mean square for
error and n is the number of samples present at a level of the factor of interest. For
the comparison Lij to be significant, its absolute value must exceed some critical
value C, where C depends on the method of comparison being used. Confidence
intervals for a comparison are computed as Lij ± C.
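
All of the methods described below share this structure; they differ only in how
the critical value C is chosen. A minimal Python sketch of the shared logic,
assuming NumPy (the function names here are ours, for illustration only):

import numpy as np

def se_comparison(mse, n):
    # SE(Lij) = (2*MSE/n)^0.5 for a complete, balanced AOV
    return np.sqrt(2 * mse / n)

def compare(mi, mj, C):
    # Lij is significant when |Lij| > C; its confidence interval is Lij ± C
    L = mi - mj
    return L, abs(L) > C, (L - C, L + C)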

The most powerful (least conservative) comparison procedure is the LSD, or Least
Significant Difference method. The critical value for a comparison is SE(Lij) T,
where T is Student’s t-statistic for the degrees of freedom associated with MSE.
This method is also called the T method. LSD controls the comparisonwise error
rate at α but allows the experimentwise error rate to increase as the number of
comparisons increases. Some advocate using this method only if the overall F test
is significant, leading to what has sometimes been called the PSD, or Protected
Significant Difference. Contrary to what has sometimes been claimed, the PSD
method doesn’t control the experimentwise error rate if there are more than three
levels for the factor of interest.

As we noted earlier, the overall F test is testing a much broader range of
hypotheses than a multiple comparison test and so it must be more conservative. If
the set of pairwise comparisons is of primary interest, then the so-called protected
approach (proceeding only if the overall F is significant) can be criticized to some
extent regardless of the comparison method, because the F test sacrifices power to
test hypotheses that are not of direct interest. However, such cases are probably
the exception rather than the rule.

You should use the LSD method if there are a few preplanned comparisons that are
of primary interest. However, inspecting the means for large differences before
deciding which comparisons to make invalidates its use. The LSD procedure is the
most powerful pairwise comparison procedure, but it will generally have the
highest experimentwise error rate. We mentioned earlier that the LSD approach
controls the comparisonwise error rate at α. If you use the LSD method and report
significant comparisons, you should be prepared to justify why you didn’t find it
necessary to control the experimentwise error rate.

The LSD procedure can be modified to prevent the experimentwise error from
growing as the number of comparisons increases. The general idea is to make it
more difficult to reject as the number of comparisons increases, which can be done
by increasing the critical value of T as the number of comparisons increases.
Suppose T(p) is the T value corresponding to a two-tailed significance level of p
for Student’s t. For the LSD procedure, p is the constant α. To control the
experimentwise error rate, p should be some decreasing function of m, where m is
the number of comparisons. Two common methods for this are Bonferroni’s and
Sidak’s. Bonferroni’s, probably the more popular of the two, uses the function
p = α/m, and Sidak’s uses the function p = 1 − (1 − α)^(1/m). Using either of these
methods results in an experimentwise error rate of less than α. The problem with
these procedures is they rapidly grow conservative as m increases; in effect, the
experimentwise error rate is reduced too much and real differences do not get
detected (test power is lost). Bonferroni’s is generally more conservative than
Sidak’s. Because of rapidly decreasing power, these procedures are not
recommended for general use although they can be useful when the number of
means, and hence the number of comparisons, is small.
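
As a sketch of these adjustments, assuming SciPy, the per-comparison rejection
levels and the corresponding critical T values for the LSD, Bonferroni, and Sidak
methods can be computed as follows (P = 4 means and 36 error DF are borrowed
from the alfalfa example above purely for illustration):

from scipy import stats

alpha, P, df = 0.05, 4, 36
m = P * (P - 1) // 2                     # number of pairwise comparisons: 6

p_lsd = alpha                            # controls the comparisonwise rate only
p_bonf = alpha / m                       # Bonferroni
p_sidak = 1 - (1 - alpha) ** (1 / m)     # Sidak, slightly less conservative

for p in (p_lsd, p_bonf, p_sidak):
    print(stats.t.ppf(1 - p / 2, df))    # critical T; times SE(Lij) gives C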

Tukey’s method is the most useful pairwise comparison procedure Statistix
performs. It controls the experimentwise error rate, yet still retains good power.
It’s based on the Studentized range statistic. Suppose there are P means for the
factor of interest, with X(1) being the smallest and X(P) being the largest. The
standard error of a mean is (MSE/n)½, where MSE is the mean square for error and
n is the number of samples within each level. (In terms of SE(Lij), which is
displayed by Statistix, the standard error of a mean is SE(Lij)/2½.) Under the
usual assumptions, the statistic (X(P)−X(1)) / (MSE/n)½ then has a Studentized range
distribution if there are no differences between the population means. The critical
value for a comparison Lij is C = (MSE/n)½ Q(P,DF), where Q(P,DF) is the
Studentized range value for P means and DF degrees of freedom (degrees of
freedom associated with MSE) at the desired rejection level α. Tukey’s procedure
may find significant comparisons even if the overall F test is not significant
because Tukey’s test restricts itself to the pairwise comparison subset of contrast
space. This is mentioned because it helps in deciding whether to use Tukey’s
procedure or Scheffe’s procedure.
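
Tukey’s critical value can be reproduced outside Statistix with SciPy’s studentized
range distribution. This minimal sketch uses the figures from the alfalfa report
above (SE for comparison 0.0533, P = 4 means, 36 error DF):

from scipy.stats import studentized_range

se_diff, P, df = 0.0533, 4, 36
se_mean = se_diff / 2 ** 0.5                 # (MSE/n)^0.5
q = studentized_range.ppf(1 - 0.05, P, df)   # critical Q, about 3.810
C = se_mean * q                              # about 0.1435
print(q, C)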

Basically, Scheffe’s procedure treats pairwise comparisons as “just another
contrast”. Suppose you’ve just observed a significant overall F. Clearly you’d be
interested in investigating the pattern(s) among the means that produced this result.
In this context, pairwise comparisons are just one of any number of contrasts that
may interest you; you are interested in general “data-snooping”. Scheffe’s
procedure controls the experimentwise error rate, but here the “experiment” is not
just the m = P(P−1)/2 comparisons but all possible contrasts. The price you pay
for such general protection is that Scheffe’s procedure is more conservative than
Tukey’s; it won’t detect some differences between means that Tukey’s will. If the
overall F test was not significant, Scheffe’s comparisons will never be significant
either. The critical value for Scheffe’s is C = SE(Lij) [(P−1) F(P−1, DF)]½, where
F(P−1, DF) is the appropriate F value.
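
A corresponding sketch for Scheffe’s critical value, assuming SciPy and the same
illustration figures used above:

from scipy import stats

alpha, P, df, se_diff = 0.05, 4, 36, 0.0533
F_crit = stats.f.ppf(1 - alpha, P - 1, df)
C = se_diff * ((P - 1) * F_crit) ** 0.5   # larger than Tukey's 0.1435, as expected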

Two comparison procedures that are very popular in the natural sciences and other
areas are Duncan’s New method and the Student-Newman-Keuls, or SNK,
method. These procedures are not recommended and Statistix doesn’t compute
them. Duncan’s New method controls the comparisonwise error rate at α and
generally gives results similar to the LSD procedure (Hsu, 1996). The SNK
procedure doesn’t control the experimentwise error rate under a partial null
hypothesis and cannot be recommended (Einot and Gabriel, 1975; Hsu, 1996).
There are a number of procedures more powerful than Tukey’s that still control the
experimentwise error rate (Ryan, 1960; Einot and Gabriel, 1975; Welsch, 1977;
Begun and Gabriel, 1981). Like Duncan’s New and SNK, these procedures are
multiple-stage tests, which means the critical value doesn’t remain constant for all
comparisons but rather varies as the homogeneous subsets are constructed. The
disadvantages of such multiple-stage procedures are that they’re more complex to
explain and compute and, in particular, do not permit the construction of
confidence intervals, which is often useful when you present your results.

The basis for deciding which procedure to use is somewhat subjective and
philosophical. Hsu (1996) recommends Tukey’s method (also called the Tukey-
Kramer method when used for unbalanced data) for preplanned all-pairwise
comparisons.

Comparisons Statistix computes quantiles for Student’s t distribution using a procedure patterned
of Means - after Hill (1970). Quantiles for the F distribution are found by finding the inverse
Computational of the corresponding beta distribution using Newton’s method. The algorithm
Notes used to perform this is similar to Majumder and Bhattacharjee’s (1973), although a
different procedure, described in Probability Functions (Chapter 17), is used to
compute the cumulative distribution function of the beta distribution. The
quantiles for the Studentized range distribution are computed with a procedure
patterned after Lund and Lund (1983). The standard errors of comparisons for
nonorthogonal designs (unbalanced designs or designs including covariates) are
computed as √L(X′X)⁻¹s²L′, where L is a vector of ones and zeros representing the
mean difference and (X′X)⁻¹s² is the variance-covariance matrix.

Multiple Comparisons with a Control


This procedure makes use of Dunnett’s test for comparing all treatment means
with the mean of a control. This test is more powerful than using Tukey’s all-
pairwise test because there are fewer comparisons.

We’ll use the randomized complete block example from page 222, where the
treatment factor Trt was the type of fungicide applied to batches of 100 soybeans.
Trt has five levels; the first is a no-fungicide control, and the remaining four are
different types of fungicide. The dependent variable Failures is the number of
beans out of 100 that failed to sprout.

The dialog box is displayed on the next page. First select the main effect or
interaction for which you want comparisons and move it to the Term Selected for
Mean Comparisons box. Next enter the value of the level that identifies the
control treatment. If you’re testing an interaction term, enter one value for each
factor in the term, separated by commas. Enter a value for the rejection level in the
Alpha edit control. Select an Alternative Hypothesis. Select “not equal to the
control” to perform the two-sided test.

The results for the example are shown below. The table displays the difference
between the control mean and the remaining means. Means significantly different
from the control mean (differences significantly different from zero) are flagged
with an asterisk. 95% simultaneous confidence intervals of the mean differences
are also displayed.

Two-sided Dunnett's Multiple Comparisons with a Control of Failures
Control: Trt=Control

Simultaneous 95% confidence intervals of treatment mean - control mean
Lower Upper
Trt Mean Bound Difference Bound
Control 10.800
Fung #1 6.200 -8.584 -4.600* -0.616
Fung #2 8.200 -6.584 -2.600 1.384
Fung #3 6.600 -8.184 -4.200* -0.216
Fung #4 5.800 -8.984 -5.000* -1.016

Alpha 0.05 Standard Error for Comparison 1.4711
Critical D Value 2.708 Critical Value for Comparison 3.9839
Error term used: Blk*Trt, 16 DF

See Hsu (1996) for computational details.
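
For readers working outside Statistix, recent versions of SciPy offer a Dunnett
test. The sketch below uses made-up failure counts, not the soybean data, and it
performs a simple one-way analysis; unlike the report above, it does not use the
Blk*Trt error term from the randomized complete block design:

import numpy as np
from scipy import stats

control = np.array([12, 10, 11, 9, 12])   # hypothetical no-fungicide counts
fung1 = np.array([7, 6, 5, 7, 6])         # hypothetical fungicide #1 counts
fung2 = np.array([9, 8, 9, 7, 8])         # hypothetical fungicide #2 counts

res = stats.dunnett(fung1, fung2, control=control, alternative='two-sided')
print(res.pvalue)                         # adjusted p-values vs. the control
print(res.confidence_interval(0.95))      # simultaneous 95% intervals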



Multiple Comparisons with the Best
This procedure makes use of Hsu’s test for multiple comparisons with the best
(Hsu, 1996). It’s useful when you’re most interested in identifying those
treatments that may provide the best result. This test is more powerful than using
Tukey’s all-pairwise test because there are fewer comparisons.

First select the main effects or interactions for which you want comparisons and
move them to the Terms Selected for Mean Comparisons box. Enter a value for the
rejection level in the Alpha edit control. Select a value for what constitutes the
Best Mean: the one with the largest value, or the one with the smallest value.

Let’s use the randomized complete block example from page 222, where the
treatment factor Trt was the type of fungicide applied to batches of 100 soybeans.
The dependent variable Failures is the number of beans out of 100 that failed to
sprout. The results for the analysis specified above are shown below.

Hsu's Multiple Comparisons with the Best of Failures for Trt

Simultaneous 95% confidence intervals of mean - smallest of other means
Lower Upper
Trt Mean Bound Difference Bound
Control 10.800 0.000 5.000* 8.447
Fung #1 6.200 -3.047 0.400 3.847
Fung #2 8.200 -1.047 2.400 5.847
Fung #3 6.600 -2.647 0.800 4.247
Fung #4 5.800 -3.847 -0.400 3.047

Alpha 0.05 Standard Error for Comparison 1.4711
Critical D Value 2.343 Critical Value for Comparison 3.4468
Error term used: Blk*Trt, 16 DF

The table displays the difference between each mean and the best (lowest in this
example) of the remaining means. Means significantly different from the best
mean are flagged with an asterisk. 95% simultaneous confidence intervals of the
differences from the best are also displayed. See Hsu (1996) for computational
details.



Simple Effects
When an interaction in an analysis of variance is significant, there are unique
effects associated with combining levels of factors. The effects of a particular
level of a factor can’t be assumed to be the same for all levels of the factors with
which it interacts. It’s appropriate to examine the outcome in terms of individual
levels of factors. This is called an analysis of simple effects.

Simple main effects are main effects of one factor that are limited to a particular
level of other factors. Likewise, simple interaction effects are interaction effects
that are limited to a particular level of other factors. This procedure produces an
analysis of variance table that tests for simple effects, and optionally a series of all-
pairwise comparisons of means, where each table has the levels of some factors
held constant.

First, select an interaction term from the Interaction Terms in Model box and
move it to the Interaction of Interest box. Next, select one or more of the terms
listed in the Simple Effects Candidates box and move them to the Simple Effects
Terms box.

If you want to obtain all-pairwise comparisons for the selected simple effects
terms, check the Include Pairwise Comparisons check box, then select a
Comparison Method and Report Format, and enter a value for the rejection level
in the Alpha box. See All-pairwise Multiple Comparisons on page 251 for a
description of these options.



Except for the LSD method, the pairwise comparison methods control the
experimentwise error rate at α, based on the number of means involved in the
comparisons. You can have the Comparison Error Rate based on the number of
means in the simple effects terms, or on the full interaction term. The latter is more
conservative, and gives the same results as you would have obtained using the
All-pairwise Multiple Comparisons procedure for the full interaction term.

We’ll use a two-way AOV from Winer et al. (1991) to illustrate the simple effects
procedure. Behavioral changes were recorded after giving three different drugs to
two categories of patients. The data are available in the file Sample Data\Winer
Table 6-3.sx. The basic AOV table displayed below shows a significant
Category*Drug interaction.

Analysis of Variance Table for Behavior

Source DF SS MS F P
Category 1 18.000 18.0000 2.04 0.1789
Drug 2 48.000 24.0000 2.72 0.1063
Category*Drug 2 144.000 72.0000 8.15 0.0058
Error 12 106.000 8.8333
Total 17 316.000

The analysis of simple effects for Drug within Category*Drug is specified using
the dialog box on the preceding page. The results are displayed below.

Simple Effects of Behavior for Drug within Category*Drug

Source DF SS MS F P
Drug(Category=Schizophrenic) 2 24.000 12.0000 1.36 0.2939
Drug(Category=Depressive) 2 168.000 84.0000 9.51 0.0034
Error 12 106.000 8.8333

Tukey HSD All-Pairwise Comparisons Test of Behavior for Drug within Category*Drug

Category Drug Mean Homogeneous Groups
Schizophrenic 2 8.0000 A
3 6.0000 A
1 4.0000 A

Category Drug Mean Homogeneous Groups
Depressive 3 12.000 A
1 10.000 A
2 2.000 B

Alpha 0.05 Standard Error for Comparison 2.4267
Critical Q Value 3.783 Critical Value for Comparison 6.4919

We conclude from the F tests that while the effect of Drug is not significant among
schizophrenic patients, it is significant among depressive patients (p = 0.0034).
The mean comparisons table shows that the mean for drug number 2 is
significantly different from the remaining drugs among depressive patients.

See Winer et al. (1991) for a description of simple effects and computational
details.



Contrasts
This powerful option computes any linear one-degree-of-freedom contrast for any
effect or interaction. Linear contrasts are linear combinations of the means for any
effect or interaction, and they’re valuable for examining the “fine structure” of the
data after the overall F test indicates that the effect or interaction is significant.
Contrasts can’t be computed for interaction terms in models that have multiple
error terms.

Suppose, in the randomized block example on page 222, you’re interested in
whether the mean for the control (no fungicide) is different from the mean of the
four treatments (fungicides applied). To make this comparison, you enter “1 −0.25
−0.25 −0.25 −0.25” for the contrast coefficients.

To specify the general contrast, first select the main effect or interaction for which
you want to construct contrasts and move it to the Contrast Term box—Trt in our
example.

You can specify up to five contrasts at once. The coefficients entered must sum to
zero. The absolute values affect the resulting value of the contrast, but not the tests
of significance. The coefficients can be entered as integer or real values. The
ordering of the coefficients is determined by the values used to represent the levels
of the factors. In our example, the no-fungicide control is represented in the
variable Trt as 1, the four fungicides are represented as 2, 3, 4, and 5. The list of
coefficients “1 −0.25 −0.25 −0.25 −0.25” can be abbreviated using a repeat factor:
“1 4(−0.25)”. When entering coefficients for an interaction term, you enter them
in the order with rightmost subscripts changing fastest (in the same order that
means are listed using the Means and Standard Errors procedure).

The results for the example are presented below.

AOV Contrasts of Failures by Trt

Contrast Coefficients: 1 -0.25 -0.25 -0.25 -0.25

Contrast 4.1000 SS (Contrast) 67.240
Scheffe's F 3.11 P (Scheffe's F) 0.0453
T-Statistic 3.53 P (T-Statistic) 0.0028
SE (Contrast) 1.1630

Error term used: Blk*Trt, 16 DF

The value of the contrast (4.1000) is the difference of the mean of the control and
the average of the remaining means. Scheffe’s F method of significance testing for
arbitrary simultaneous contrasts is used to test the hypothesis that the contrast is
zero. The contrast in the above example is seen to be significant at the 5% level (p
= 0.0453). Scheffe’s procedure is appropriate for any number of a posteriori
contrasts, which means it can be used to test hypotheses that arise after the data are
collected and inspected. It protects you from making too many type I errors
(rejecting a correct null hypothesis) during such “data-snooping”. In the example
report above, the sum of squares due to the contrast is computed in the
usual way, as illustrated in Sections 12.7 and 12.8 of Snedecor and Cochran
(1980). The computational methods used are discussed in Section 6.4 of Scheffe
(1959).

The statistic Scheffe’s F is computed as SSC/(DF·MSE), which is equivalent to
L²/(DF·SE(L)²), where SSC is the sum of squares due to the contrast, MSE is the
mean square for error, L is the value of the contrast, SE(L) is the standard error of
the contrast, and DF is the degrees of freedom associated with the contrast. The
same error term is used as would be used for the F test in the original AOV table.

In addition to Scheffe’s F method, Student’s t test is performed. Student’s t test is
appropriate for a priori tests (contrasts that had been planned before the data were
inspected). Student’s t test doesn’t control the experimentwise error rate, as we
discussed in the comparisons of means section (page 253). Student’s t-statistic is
computed as L/SE(L).
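
The arithmetic can be sketched in Python using the figures from the soybean
report above. This is an illustration, not the Statistix implementation; the MSE
value (about 5.41) is inferred from the printed standard error of the contrast:

import numpy as np
from scipy import stats

means = np.array([10.8, 6.2, 8.2, 6.6, 5.8])     # Control, Fung #1..#4
coef = np.array([1, -0.25, -0.25, -0.25, -0.25])
n, mse, df_error = 5, 5.41, 16                   # reps, inferred MSE, error DF

L = coef @ means                                 # contrast value: 4.10
se_L = np.sqrt(mse / n * np.sum(coef ** 2))      # SE(L), about 1.163
t = L / se_L                                     # about 3.53
p_t = 2 * stats.t.sf(abs(t), df_error)           # about 0.0028

df_c = len(means) - 1                            # treatment DF, used by Scheffe
F_scheffe = t ** 2 / df_c                        # about 3.11
p_scheffe = stats.f.sf(F_scheffe, df_c, df_error)   # about 0.0453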

The standard errors of a contrast L for nonorthogonal designs (unbalanced designs
or designs including covariates) are computed as √Ls²(X′X)⁻¹L′, where s²(X′X)⁻¹
is the variance-covariance matrix.

By Variable It’s possible to compute contrasts for a term in the model, but at each level of
another factor in the model. Using the guinea pig forage example from page 247,
let’s compute a contrast for comparing soil types 1 and 2, at each level for the
factor Fertilize (1 = not fertilized, 2 = fertilized). This is specified in the dialog
box on the next page.

The results are presented below.

AOV Contrasts of WtGain by Soil for Fertilize

Contrast Coefficients: 1 -1 0 0

FOR Fertilize = 1
Contrast 121.17 SS (Contrast) 19378
Scheffe's F 15.09 P (Scheffe's F) 0.0002
T-Statistic 6.73 P (T-Statistic) 0.0000
SE (Contrast) 18.007

FOR Fertilize = 2
Contrast 144.93 SS (Contrast) 25286
Scheffe's F 19.70 P (Scheffe's F) 0.0001
T-Statistic 7.69 P (T-Statistic) 0.0000
SE (Contrast) 18.855

Error term used: Blk*Soil*Fertilize, 12 DF


The contrast for the Fertilize=1 test is computed using the means for the
Soil*Fertilize interaction, but only using the means for the unfertilized cells. The
contrast for the Fertilize=2 test is computed using only the means for the fertilized
cells.



Polynomial Contrasts
This option computes the polynomial decomposition of the sums of squares for
any main effect. This is very useful for determining the existence and nature of
trends in the treatment level means.

In the split-plot example on page 232, we’re interested in examining the trends of
yield Tons as a function of cutting date Date. In the dialog box above, we’ve
entered Date for the Contrast Term and entered a value of 3 for Degree.

By specifying that polynomials up to degree 3 be computed, we will get sums of
squares for linear (degree 1), quadratic (degree 2), and cubic (degree 3) trends due
to Date. The results are shown below.

Polynomial Contrasts of Tons by Date Date of harvest

Degree = 1, Linear Trend

Contrast 0.3073 SS (Contrast) 1.7003
Scheffe's F 22.20 P (Scheffe's F) 0.0000
T-Statistic 8.16 P (T-Statistic) 0.0000
SE (Contrast) 0.0377

Degree = 2, Quadratic Trend

Contrast -0.1200 SS (Contrast) 0.2594
Scheffe's F 3.39 P (Scheffe's F) 0.0284
T-Statistic -3.19 P (T-Statistic) 0.0030
SE (Contrast) 0.0377

Degree = 3, Cubic Trend

Contrast 0.0123 SS (Contrast) 2.72E-03
Scheffe's F 0.04 P (Scheffe's F) 0.9909
T-Statistic 0.33 P (T-Statistic) 0.7459
SE (Contrast) 0.0377

Error term used: Variety*Blk*Date, 36 DF

Strong support exists for a linear trend with Date (p = 0.0000) in the example. The
positive value for the contrast (0.3073) indicates that Tons increase with increasing
Date. There also appears to be some evidence of a quadratic trend in addition to
the linear trend. Perhaps this indicates that a date before November 10 should
have been used as the end of the effective growing season. For example, if you use
October 15 instead (Date = 80 instead of 106), the linear trend is stronger and the
quadratic trend disappears.

Remember that the actual spacings of the treatment levels are used to compute
polynomial contrasts. Because calculating unequally spaced polynomials can be
quite tedious, researchers commonly ignore the unequal spacing of levels and treat
them as equally spaced, even though this can result in substantial errors. As the
example shows, the choice of level spacings can have considerable influence on
the results. Because of the ease with which this option can handle unequal
spacings, there’s little excuse for not using them.
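
For unequal spacings, orthogonal polynomial contrast coefficients can also be
generated numerically. A minimal sketch, assuming NumPy, using the harvest
dates from this example; the signs and scaling of the coefficients may differ from
those Statistix uses internally:

import numpy as np

dates = np.array([35.0, 54.0, 71.0, 106.0])
V = np.vander(dates, 4, increasing=True)   # columns: 1, d, d^2, d^3
Q, _ = np.linalg.qr(V)                     # orthonormalize the columns
linear, quadratic, cubic = Q[:, 1], Q[:, 2], Q[:, 3]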

Plots
The Plots submenu offers an option to plot means for main effects and two-factor
interactions, and two plots for examining the residuals.

The Residuals By Fitted Values plot is useful for examining whether the
variances are equal among the groups. If the order of the groups is meaningful,
then systematic departures from equality can be seen in the plot.

The Normal Probability Plot plots the residuals against the rankits. Plots for
normal data form a straight line. The Shapiro-Wilk statistic for normality is also
reported on the plot. See Chapter 12 for details.

We’ll illustrate the Means Plot using the two-factor repeated measures example
discussed on page 241. We’re interested in looking at how alcohol affects the
aggressiveness of people with and without antisocial personality disorder (ASP).
The dialog box is displayed below.



A means plot plots the dependent variable along the Y axis. The factor you specify
in the Means Factor box is plotted along the X axis. If you specify a By Factor,
one line plot of means is drawn for each level of the factor specified. The Means
Factor is limited to 30 levels. The By Factor is limited to six levels.

Next select the Plot Type, either bar chart or line chart. The bar chart uses vertical
bars to represent the means. The line chart uses circles to mark the means, and the
circles are connected sequentially with lines.

Check the Plot C.I. Bars check box to include vertical bars representing the
confidence intervals of the means. Specify a value for C.I. Percent Coverage to
control the width of the confidence intervals. Unlike the confidence intervals
produced by the Error Bar Chart described in Chapter 4, the intervals produced
here use the pooled estimate of error.

You can enter Low, High, and Step values to control the Y axis scale. You can
use this feature to create a meaningful interval width and interval boundaries.

The means plot for the alcohol example specified above is shown on the next page.
Recall that the F test for the P_Type*Drink interaction was significant. The means
plot shows that while aggressiveness increases for both personality types while
drinking alcohol, the increase is more dramatic for the subjects diagnosed with
ASP.



If you specified more than one dependent variable for your analysis of variance,
you can view a plot for one dependent variable at a time. Arrows appear on the
toolbar, as shown below.

Press the right-arrow button on the toolbar to display the plot for the next
dependent variable. Press the left-arrow button to display the plot for the previous
dependent variable.

Save Residuals
Select Save Residuals to compute the residuals, fitted values, or adjusted data for a
particular model and store them for later examination. An example dialog box is
shown below.



You can type in the names of new variables in the spaces provided, or you can
click on the down arrow to select the name of an existing variable from a drop
down list. When you’ve finished entering variable names, press OK to start the
computations. The results aren’t displayed. The residuals are stored as variables
and can be examined using procedures such as Scatter Plot, Shapiro-Wilk Test,
Print, and Runs Test.

The residuals are used to evaluate how well a model fits the data. A residual is
defined as the difference between the actual observed response and that predicted
by the fitted model. Residuals can help detect bad values (outliers) and can also
help suggest more appropriate models or transformations to apply. Consult Daniel
(1976) for more detail.

The Adjusted Data Variable option is only available when you included covariates
in the model using the General AOV/AOCV procedure. The adjusted data are the
dependent variable data adjusted for the values for the covariates on a case by case
basis.

If you specified more than one dependent variable for your analysis of variance,
you can only save the residuals for the first dependent variable listed.

Covariance The Covariance Matrix option only appears on the results menu for repeated
Matrix measures designs. See page 243 for an example.



C H A P T E R

10
Nonlinear Models
These procedures are used to fit smoothed curves and surfaces to two-dimensional
or multidimensional data.

The Spline Interpolation procedure fits smooth curves to X-Y data by
linear, cubic, or tension spline interpolation.

Polynomial Regression is used to fit a polynomial regression model of a
dependent variable on one or more independent variables using polynomials up to
degree nine.

Nonlinear Regression uses the Marquardt algorithm to fit nonlinear regression
models to data with one or more independent variables. Estimates of the
regression coefficients are provided as well as extensive options for graphing
curves and surfaces and residual analysis.

The Loess procedure fits smoothed curves and surfaces to multivariate scattered
data using local linear regression with nearest neighbor bandwidth. Resulting
models are evaluated using graphs and residual analysis.

Spline Interpolation
The Spline Interpolation procedure fits a curve to X-Y data such that the curve
passes through each data point. You can save interpolated values for points that
fall between the original data points. Linear splines connect the points using
straight lines using linear interpolation. Cubic splines connect the points using
smooth curved lines by interpolation using piecewise third-order polynomials
(cubics). Tension splines interpolate using a spline under tension. They’re similar
to, but somewhat tighter than, a cubic spline, and less likely to have spurious
inflection points.
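
Cubic spline interpolation can also be reproduced outside Statistix. A minimal
sketch, assuming SciPy, that mirrors the sine example shown below:

import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 2 * np.pi, 7))   # seven random points
y = np.sin(x)

spline = CubicSpline(x, y)                  # passes through every data point
x_new = np.linspace(x.min(), x.max(), 100)
y_new = spline(x_new)                       # interpolated values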

Specification The dialog appears on the next page. Select the variables for the plot by moving
one variable to the X Axis Variable box and one to the Y Axis Variable box.
Select a Method: Linear, Cubic, or Tension spline. Tension splines require a
numeric value for Tension. A low tension value will plot a spline similar to a
cubic spline, a high value will plot curves that will approach a linear spline.



Example We’ll illustrate cubic splines by interpolating the sine function using seven random
points of the unit circle. The data were created using two transformations:
X = Random(uniform,0,2*pi)
Y = Sin(X)

The dialog box is shown above. The resulting plot is displayed below.

Save Once the spline curve is plotted, the Results menu icon, which appears as a shovel
Interpolated (think “dig-deeper”), on the toolbar is enabled. Click on the icon to display the
Values results menu. The Save Interpolated Values procedure on the results menu lets
you save interpolated values from the fitted curve. You enter the name of a new or
existing variable into the appropriate space of the dialog box and press OK.
Interpolated values are computed and saved in the variable specified.

The plot is computed for selected cases that have valid values for both the X and Y
variable. To compute interpolated values, you must first enter the values you want
to compute interpolated values for in the X variable. Either enter M for missing in
the corresponding cases for the Y variable, or omit the cases using the
Omit/Select/Restore Cases procedure. Such cases will be ignored when fitting the
curve, but will be used to save interpolated values.

Computational See Forsythe et al. (1977) for details on cubic splines. See Cline (1974) for details
Notes on tension splines.

Polynomial Regression
Polynomial Regression is a curvilinear response model where a response variable
is modeled using polynomials of one or more independent variables. It’s a special
case of the general linear regression model because it can be fitted using
transformations of the independent variables. A second-order polynomial model is
sometimes a good approximation when the true response function is unknown or
complex.

Specification

To specify a polynomial regression model, first select the dependent variable.
Highlight the variable in the Variables list, then press the right-arrow button next
to the Dependent Variable box to move the highlighted variable into that box.
Then select one, two, or three independent variables and move them to the
Independent Variables list box. For weighted regression, move the variable
containing the weights to the Weight Variable box. Specify the Degree for the
model. Enter 2 for a second-order model, 3 for a third-order model, etc.

Data You can include up to three independent variables in the model. If any values
Restrictions within a case are missing for any of the variables in the model, the case is dropped.
If you specify a weight variable, cases with zero weights are deleted. The
maximum value for degree is 9 for models with one independent variable, and 2
for models with two or three independent variables.

Differenced It’s common practice to express the independent variables in a polynomial
Scale regression model as deviations around their means. For example, the second-order
model with one independent variable is written
Yi = β0 + β1xi + β2xi² + εi, where xi = Xi − X̄
The reason behind this notation is that X, X², X³, etc. will typically be highly
correlated, which can cause computational difficulties. Computing the regression
using xi rather than Xi reduces multicollinearity, and avoids the associated
computation problems.

The coefficients and their associated standard errors and t tests are expressed in the
differenced scale. The analysis of variance table and R² are not affected by this
transformation.
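
The effect is easy to demonstrate. A minimal sketch, assuming NumPy and using
the dispenser counts from the example below, shows the correlation between X
and X² before and after centering:

import numpy as np

X = np.repeat(np.arange(7.0), 2)      # dispenser counts 0,0,1,1,...,6,6
x = X - X.mean()                      # differenced scale

print(np.corrcoef(X, X ** 2)[0, 1])   # near 1: strong collinearity
print(np.corrcoef(x, x ** 2)[0, 1])   # 0 here, since this design is symmetric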

Example I The example data are from Neter et al. (1989). A cafeteria chain designed an
experiment to study the effect of self-service coffee dispensers on coffee sales.
They installed varying numbers of dispensers at different locations and recorded
total coffee sales (hundreds of gallons). The data are displayed below and are
available in the file Sample Data\Coffee.sx.

Dispenser Sales
0 508.1
0 498.4
1 568.2
1 577.3
2 651.7
2 657.0
3 713.4
3 697.5
4 755.3
4 758.9
5 787.6
5 792.1
6 841.4
6 831.8

The second-order regression model is specified using the dialog box on the
preceding page. The results are displayed on the next page.

The report includes a coefficient table, model summary statistics, and an analysis
of variance table. The interpretation of the information is the same as for a linear
regression model. The figures in the coefficient table are reported using the
differenced scale. So the fitted model is
Y = 705 + 54.9 (X − X̄) − 4.25 (X − X̄)²

For convenience, the coefficients are also reported in the original scale.
Y = 503 + 80.4 X − 4.25 X²

Polynomial Regression of Sales

Differenced Scale (i.e., Xi - Xbar)


Predictor
Variables Coefficient Std Error T P VIF
Constant 705.474 3.20799 219.91 0.0000
Dispenser 54.8929 1.05006 52.28 0.0000 1.0
Dispenser² -4.24881 0.60625 -7.01 0.0000 1.0

Original Scale
Predictor
Variables Coefficient Std Error
Constant 502.556 4.85003
Dispenser 80.3857 3.78605
Dispenser² -4.24881 0.60625

R² 0.9961 Mean Square Error (MSE) 61.7473
Adjusted R² 0.9953 Standard Deviation 7.85795
AICc 66.791
PRESS 1106.8

Source DF SS MS F P
Regression 2 171773 85886.7 1390.94 0.0000
Residual 11 679 61.7473
Total 13 172453

Lack of Fit 4 388 96.9089 2.33 0.1554
Pure Error 7 292 41.6550

Cases Included 14 Missing Cases 0

Since there are two repeat observations for each number of dispensers installed,
you have an independent measure of variation, the pure error listed at the bottom
of the report. The F-test for lack of fit is not significant at alpha = 0.05, so we can
conclude the quadratic response function is apt.

The t-test for Dispenser² in the coefficient table tests the hypothesis that the
coefficient for the quadratic term equals zero. The test is significant (p < 0.0001),
so we conclude that the quadratic effect does exist.

As with linear regression, the residual plots available on the results menu should
be viewed to evaluate goodness of fit and to check that the assumptions of constant
error variance and normality are reasonable.

Example II We’ll consider a second example that involves two independent variables. The
data are from a study that looked at the effects of the charge rate and temperature
on the life of power cells (Neter et al., 1989). The data are displayed below and
are available in the file Sample Data\PowerCells.sx.

Temp Rate Cycles Temp Rate Cycles
10 0.6 150 20 1.0 184
10 1.0 86 20 1.4 109
10 1.4 49 30 0.6 279
20 0.6 288 30 1.0 235
20 1.0 157 30 1.4 224
20 1.0 131

To specify the second-order regression model, you select Cycles as the dependent
variable, and Rate and Temp as independent variables. A value of 2 is entered for
Degree (dialog not shown). The results are displayed below.

Polynomial Regression of Cycles

Differenced Scale (i.e., Xi - Xbar)


Predictor
Variables Coefficient Std Error T P VIF
Constant 162.842 16.6076 9.81 0.0002
Rate -139.583 33.0418 -4.22 0.0083 1.0
Temp 7.55000 1.32167 5.71 0.0023 1.0
Rate² 171.217 127.125 1.35 0.2359 1.1
Temp² -0.10605 0.20340 -0.52 0.6244 1.1
Rate*Temp 2.87500 4.04677 0.71 0.5092 1.0

Original Scale
Predictor
Variables Coefficient Std Error
Constant 337.721 149.962
Rate -539.518 268.860
Temp 8.91711 9.18249
Rate² 171.217 127.125
Temp² -0.10605 0.20340
Rate*Temp 2.87500 4.04677

R² 0.9135 Mean Square Error (MSE) 1048.09
Adjusted R² 0.8271 Standard Deviation 32.3742
AICc 119.16
PRESS 41593

Source DF SS MS F P
Regression 5 55365.6 11073.1 10.57 0.0109
Residual 5 5240.4 1048.09
Total 10 60606.0

Lack of Fit 3 3835.8 1278.59 1.82 0.3738
Pure Error 2 1404.7 702.333

Cases Included 11 Missing Cases 0

The lack of fit test tells us that the quadratic model is adequate. However, you’ll
notice that the coefficients for the squared terms and interaction aren’t significantly
different from zero. You can formally test the hypothesis that all three of these
coefficients are zero using a partial F-test.
F* = SSR(x1², x2², x1x2 | x1, x2) / 3 / MSE = 2459.9 / 3 / 1048.1 = 0.78

The figure for SSR above was found by addition using the numbers obtained from
the stepwise AOV table (see page 278). You could also get this figure by running
the regression model again with Degree = 1, then computing the difference of the

274 Statistix 10 User’s Manual


residual sums of squares of the two models. The number 3 in the equation above
is the degrees of freedom associated with the three terms you’re testing. It’s also
the difference between the degrees of freedom for error of the two models you’re
comparing. Using Probability Functions, you can compute the p-value for the F-
test as F(0.78, 3, 5) = 0.5538. You conclude that no curvature and interaction
effects exist, so the first-order model is adequate.
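
The same computation can be sketched with SciPy, using the sums of squares
from the stepwise AOV table on page 278:

from scipy import stats

ss_extra = 1646.0 + 284.9 + 529.0       # SSR(x1², x2², x1x2 | x1, x2) = 2459.9
df_extra, mse, df_error = 3, 1048.09, 5

F = ss_extra / df_extra / mse           # about 0.78
p = stats.f.sf(F, df_extra, df_error)   # about 0.55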

Polynomial Once the regression analysis is computed and displayed, the Results menu icon,
Regression which appears as a shovel (think “dig-deeper”), on the toolbar is enabled. Click on
Results Menu the icon to display the results menu, as shown below.

Select Coefficient Table from the menu to redisplay the regression coefficient
table. The remaining options are described on the following pages.

Durbin- This option computes the Durbin-Watson test for autocorrelation. Autocorrelation
Watson Test can occur when the data have some natural sequence, i.e., the observations are
ordered in time or space. You should always consider the possibility of
autocorrelation in trend data, such as price or population levels over time. See
page 172 for additional details.

Prediction This option computes the predicted and fitted values of the dependent variable in a
regression for values of the independent variable(s) you specify. The values for the
independent variables can be indicated in two ways. One is simply to enter the list
of desired values; the other is to enter a case number that contains the desired
values for the independent variables. You choose a method by selecting one of the
Specification Method radio buttons.

The Value Method is illustrated in the dialog box on the next page. The values 1
and 20 are entered in the Predictor Values box for the variables Rate and Temp.
The default value of 1.0 is used for the weight, so the resulting prediction intervals
are for a single future observation. If you specify a weight w, the prediction
interval is for the mean of w future observations. You can also specify the percent
coverage for confidence intervals for the fitted value. Enter a value in the C.I.
Percent Coverage box. See Linear Regression in Chapter 8 for an example report.

See the Save Residuals procedure discussed on page 278 for an alternative method
of obtaining fitted values for cases in your data file.

Plots Statistix provides plots that graph fitted curves and surfaces, plus four regression
plots directly from the Results menu. You can also save fitted values and residuals
(page 290) and use the plotting procedures described in Chapter 4 to graph
additional residual plots.

The Fitted Curve plot is available when there is one independent variable in the
model.

The dialog box for the fitted curve includes options to display points representing
the observed values and vertical bars representing the confidence intervals, as
shown below. The fitted curve for the coffee sales example (page 272) is
displayed below.

The Contour Plot is used to visualize the surface for models with exactly two
independent variables. The lines on the plot represent increasing levels of the
dependent variable in the same way that lines of a contour map show elevation.
See page 301 for an example.

Coplots of scattered data were discussed in Chapter 4. The Surface Coplot
procedure produces conditioning plots of fitted surfaces for models with two or
more independent variables. This concept is illustrated for the Nonlinear
Regression procedure (page 288) and the Loess procedure (page 301).

The Std Resids by Fitted Values plot is a scatter plot with the standardized
residuals on the Y axis and the fitted values on the X axis. You use this plot to
check the assumption of constant variances.

The Std Resids by Variable plot is a scatter plot with the standardized residuals
on the Y axis and the observed values of an independent variable on the X axis.
You use this plot to check the assumption of constant variances.

The Std Resids Time Series plot is a scatter plot with case numbers on the X axis.
You’ll find this plot useful when the data are sequentially ordered.

The three residual plots discussed above include an option to draw a loess
smoothed curve through the points to help visualize patterns. The loess curve
requires two additional parameters. Loess Alpha is a smoothing constant normally
between 0.50 and 1.0. The higher the value, the smoother the curve. Loess
Degree is either Linear, suitable for nearly linear relationships, or Quadratic,
suitable for nonlinear relationships. We recommend you start with alpha = 1.0 and
degree = linear when looking at residual plots. The Loess procedure described on
page 291 offers additional options.

The Normal Probability Plot is used to create a rankit plot of the standardized
residuals to test the assumption of normality (see Chapter 12).

Save Statistix can do extensive residual analysis, which is very important for model
Residuals evaluation. Systematic patterns in the residuals can suggest transformations or
additional variables that would improve the model. Residuals are important for
“outlier” analysis and to identify unusual observations.

Enter the name of a new or existing variable in each box for which you want to
save computed values. Press the OK button to start the computations. The results
aren’t displayed. The residuals are stored as variables and can be examined using
procedures such as Print, Scatter Plot, Shapiro-Wilk Test, and Runs Test. Each of
the residual options is described in detail on page 176.

Stepwise AOV This option produces a stepwise analysis of variance table for the polynomial
Table regression model. The results for the power cell example (page 273) are presented
in the table on the next page.

The table lists the individual contribution to the sums of squares, the cumulative
sums of squares, cumulative mean squares, cumulative adjusted R², and Mallows'
Cp statistic (see Miscellaneous Regression Topics in Chapter 8).



Stepwise Analysis of Variance of Cycles

            Individual  Cum  Cumulative  Cumulative  Adjusted  Mallows
Source              SS   DF          SS          MS        R²       CP   P
Constant        325424
Rate           18704.2    1     18704.2     18704.2    0.2318     33.0   2
Temp           34201.5    2     52905.7     26452.8    0.8412      2.3   3
Rate²           1646.0    3     54551.6     18183.9    0.8573      2.8   4
Temp²            284.9    4     54836.6     13709.1    0.8413      4.5   5
Rate*Temp        529.0    5     55365.6     11073.1    0.8271      6.0   6
Residual        5240.4   10     60606.0     60606.0

R²           0.9135    Mean Square Error (MSE)  1048.09
Adjusted R²  0.8271    Standard Deviation       32.3742
AICc         119.16
PRESS         41593

This table is useful for constructing partial F tests to compare nested models.
Figures from the table above were used on page 274 to compare the first-order and
second-order models for the power cell example.

Variance- Select this option to obtain the variance-covariance matrix of the regression
Covariance of coefficient estimates.
Betas
Variance-Covariance Matrix of Coefficients

              Constant   Dispenser  Dispenser²
Constant       10.2912
Dispenser    -5.92E-16     1.10263
Dispenser²    -1.47017    1.02E-16     0.36754

This matrix is most commonly used for constructing confidence regions about
coefficient estimates, and for testing hypotheses about functions of the coefficient
estimates. More detail on these topics can be found in Weisberg (1985).

Nonlinear Regression
The Nonlinear Regression procedure fits nonlinear regression models: models
that are not linear in the parameters. Models may include up to 20 independent
variables and 20 parameters.

Statistix searches for the least squares estimates of the parameters using an
iterative function minimization algorithm called the Levenberg-Marquardt-Nash
algorithm. The algorithm does not guarantee that the parameter estimates found
minimize the error sums of squares, and is sensitive to the starting values for the
parameters that you must provide (Statistix will generally provide reasonable
starting values for predefined models).
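As an outside illustration of this kind of iterative search, the sketch below fits the predefined 3-parameter exponential model Y=a+b*Exp(c*X) with SciPy's Levenberg-Marquardt least squares routine. The data and starting values here are hypothetical, and the routine is not the one Statistix uses.

    import numpy as np
    from scipy.optimize import least_squares

    # Hypothetical data roughly following Y = a + b*Exp(c*X)
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 2.9, 4.4, 6.8, 10.9, 17.6])

    def residuals(p):
        a, b, c = p
        return y - (a + b * np.exp(c * x))

    # method='lm' selects a Levenberg-Marquardt minimizer. As in Statistix,
    # the solution found can depend on the starting values x0, so a poor
    # guess may stop at a local minimum rather than the global one.
    fit = least_squares(residuals, x0=[1.0, 1.0, 0.5], method='lm')
    print(fit.x)  # the least squares parameter estimates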



Specification

There are two methods for Model Specification: Predefined Model and User-
specified Model. Statistix provides a list of 30 predefined models. Each of these
models includes just one independent variable. The drop-down list Predefined
Model provides the model name (although many of these models are known by
more than one name) and the model equation. The model equation uses the letter
Y for the dependent variable and X for the independent variable. You specify the
variables to be used in place of Y and X by moving the corresponding variable
names to the Dependent Variable and Independent Variable boxes. The available
predefined models are listed below.

Name                      Equation                          Shape/Use

Asymptotic Regression     Y=a-b*c^X                         convex/concave
Bleasdale-Nelder          Y=(a+b*X)^(-1/c)                  yield-density
Exponential 2-P           Y=a*Exp(b*X)                      convex/concave
Exponential 3-P           Y=a+b*Exp(c*X)                    convex/concave
Exponential Association   Y=a*(1-Exp(-b*X))                 convex/concave
Farazdaghi-Harris         Y=1/(a+b*X^c)                     yield-density
Freundlich                Y=a*X^b                           convex/concave
Geometric                 Y=a*X^(b*X)                       convex/concave
Gompertz 3-P              Y=a*Exp(-Exp(b-c*X))              sigmoid
Gompertz 4-P              Y=a+b*Exp(-Exp(c-d*X))            sigmoid
Gunary                    Y=X/(a+b*X+c*Sqrt(X))             minima/maxima
Heat-Capacity             Y=a+b*X+c/X^2                     minima/maxima
Hoerl                     Y=a*(b^X)*(X^c)                   minima/maxima
Holliday                  Y=1/(a+b*X+c*X^2)                 yield-density
Hyperbolic                Y=a+b/X                           convex/concave
Logarithm                 Y=a+b*Ln(X)                       convex/concave
Logistic 3-P              Y=a/(1+Exp(b-c*X))                sigmoid
Logistic 4-P              Y=a+b/(1+Exp(c-d*X))              sigmoid
Modified Exponential      Y=a*Exp(b/X)                      convex/concave
Modified Freundlich       Y=a+b*X^c                         convex/concave
Monomolecular Growth      Y=a*(1-Exp(-b*(X-c)))             convex/concave
Morgan-Mercer-Flodin      Y=(a*b+c*X^d)/(b+X^d)             sigmoid
Rational Function 3-P     Y=(a+b*X)/(1+c*X)                 convex/concave
Rational Function 4-P     Y=(a+b*X)/(1+c*X+d*X^2)           minima/maxima
Rational Function 5-P     Y=(a+b*X+c*X^2)/(1+d*X+e*X^2)     minima/maxima
Reciprocal Logarithm      Y=1/(a+b*Ln(X))                   convex/concave
Reciprocal                Y=1/(a+b*X)                       yield-density
Richards                  Y=a/(1+Exp(b-c*X))^(1/d)          sigmoid
Saturation Growth         Y=a*X/(b+X)                       convex/concave
Weibull-type              Y=a-b*Exp(-c*X^d)                 sigmoid

If the model you want to use isn’t provided in the list of predefined models, you
need to select User-specified Model and enter the model in the Nonlinear
Regression Model box. You enter only the right-hand side of the equation
(“Dependent Variable =” is assumed) using the actual names of the independent
variables and the parameter names from the Parameters list (you may rename the
parameters). As with using predefined models, you identify the dependent variable
by moving the name of the variable to the Dependent Variable box. The
Independent Variable box isn’t used.

The arithmetic operators available for defining the model are:


^   Exponentiation. A ^ B is A raised to the B-th power.
*   Multiplication. A * B is the product of A and B.
/   Division. A / B is A divided by B.
+   Addition. A + B is the sum of A and B.
-   Subtraction or reversal of sign. A - B is B subtracted from A. The
    expression -A, unary negation, reverses the sign of A.

The usual precedence for algebraic expressions is followed (e.g., ^ is evaluated
before * or /), so a+b*X^c is evaluated as a+(b*(X^c)). Parentheses can be used
to control the order of evaluation.

There are 13 functions available for use in the model, listed in the
Functions box:

Abs(x)      Absolute value of x.
Arcsin(x)   Arcsine of x; the result is in radians.
Arctan(x)   Arctangent of x; the result is in radians.
Cos(x)      Cosine of x, x is in radians.
Exp(x)      Exponential function; e raised to the power x.
Ln(x)       Natural (base e) log of x.
Log(x)      Base 10 log of x.
Round(x)    Rounds x to the nearest integer.
Sin(x)      Sine of x, x is in radians.
Sqr(x)      Square of x.
Sqrt(x)     Square root of x.
Tan(x)      Tangent of x, x is in radians.
Trunc(x)    Truncates the decimal portion of x.

You can type a function name in manually or select one from the list by double-
clicking on the function name.

Nonlinear regression uses an iterative procedure to obtain the least squares
estimates of the parameters. The procedure stops when all of the values for the
parameters stop changing. You specify the maximum number of iterations
performed before the procedure “gives up” if it hasn’t converged in the Maximum
Iterations box.

The Parameters grid provides space for you to rename parameters, enter starting
values, and specify lower and upper bounds for the parameters. Initially, the
parameters are named a through t. But you can rename the ones you want to use
with alternative names following these rules: parameter names must start with a
letter; are composed of letters, digits, and the underscore character; and are no
longer than nine characters. When using user-specified models, it's important
not to use parameters with names that match any variable name in your data file,
even those variables you’re not using in the model. Renaming the parameters is
one way to avoid this conflict.

You need to provide starting values for the parameters. For some models and data
sets, using 0.0 or 1.0 will be good enough. But for other models or data sets, good
starting values are critical to reaching convergence, or for reaching convergence at
the global minimum. Unfortunately, there isn’t a single strategy for obtaining
starting values for all models. For models where the parameters represent a feature
of the fitted curve, such as intercept, inflection point, or asymptote, viewing a
scatter plot of the raw data may provide some insight. See Ratkowsky (1983) and
Ratkowsky (1990) for additional strategies.
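As one concrete illustration of reading starting values off the data, the sketch below computes rough values for the 3-parameter logistic Y=a/(1+Exp(b-c*X)): the asymptote a is taken just above the largest observed Y, and b and c come from a straight-line fit to the linearized model. This is a common textbook heuristic, not the algorithm Statistix uses.

    import numpy as np

    def logistic3_start(x, y):
        # Y = a/(1 + Exp(b - c*X)) implies ln(a/Y - 1) = b - c*X,
        # a straight line in X once a rough asymptote a0 is chosen.
        # Requires all y values to be positive and below a0.
        a0 = 1.05 * np.max(y)
        z = np.log(a0 / y - 1.0)
        slope, intercept = np.polyfit(x, z, 1)
        return a0, intercept, -slope   # starting values for a, b, c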

All of the predefined models in Statistix include algorithms for computing starting
values. When you select a model from the list, Statistix will attempt to compute
starting values using your data. If it succeeds, the resulting values will be
displayed in the Parameters grid (the starting values in the dialog box on page 280
were computed by Statistix). If it fails to find starting values (often because the
selected model is a poor fit), then 1.0 will be assigned to each parameter.

You may specify lower and/or upper bounds for some or all of the parameters in
your model. Reasons for doing so include avoiding solutions that don’t make
sense biologically or physically, and avoiding arithmetic operations that are
undefined (such as taking the log of a negative number). We find that leaving the
bounds blank usually works, and that problems with reaching convergence are
better corrected by using different starting values. Viewing the Minimization
History report (see page 286) can provide insight to appropriate boundaries for
models that fail to converge, or converge to an unsuitable solution.



Data Models may include up to 20 parameters and up to 20 independent variables.
Restrictions Parameter names may not match any variable names in your data.

Example I Our first example illustrates the use of a predefined model. The data are from an
experiment that looked at water content in bean root cells as it relates to distance
from the growing tip (Ratkowsky, 1983). The data are presented below, and are
available in the file Sample Data\Beanroot.sx.

Case  Distance  Water      Case  Distance  Water
1 0.05 0.13 9 0.85 1.64
2 0.15 0.13 10 0.95 1.83
3 0.25 0.19 11 1.05 2.09
4 0.35 0.34 12 1.15 2.05
5 0.45 0.53 13 1.25 2.13
6 0.55 0.71 14 1.35 2.12
7 0.65 1.06 15 1.45 2.09
8 0.75 1.60

These data, like many processes in biology, engineering, and economics, follow a
sigmoidal or “S-shaped” growth curve. Statistix offers several predefined models
for sigmoidal growth, including Gompertz, Logistic, Richards, Morgan-Mercer-
Flodin, and Weibull type models.

The analysis is specified using the dialog box on page 280. We selected the 3-
parameter logistic model, and starting values for parameters a, b, and c were
calculated for us. You can accept these starting values, or replace them with
different values. The results are presented on the next page.

The report first presents a table of least squares estimates for the parameters, and
the standard errors and 95% confidence intervals of the parameter estimates.
Considering whether the parameter estimates seem reasonable, and whether the
confidence intervals aren't too wide, are first steps in evaluating the
appropriateness of the model you selected.

Below the table of parameter estimates is a message stating that the convergence
criterion was met after 9 iterations. Recall that convergence is not guaranteed, so
it's important to read this message. If convergence does not occur, then you should
consider increasing the maximum number of iterations, using different starting
values, or perhaps a different model.

The residual sums of squares, mean square, standard deviation, and degrees of
freedom are reported. AICc is the small-sample version of Akaike's Information
Criterion, which is useful for comparing models (see Miscellaneous Regression
Topics in Chapter 8).

A table of correlations of the regression parameters is presented. High correlations
(say, absolute value higher than 0.95) may, though not necessarily, indicate a
problem with the precision of the parameter estimates.



Nonlinear Regression

MODEL: Water = a/(1+Exp(b-c*Distance))

Lower Upper
Parameter Estimate Std Error 95% C.I. 95% C.I.
a 2.150890 0.041536 2.060391 2.241389
b 3.957322 0.261877 3.386740 4.527904
c 6.221857 0.446023 5.250057 7.193657

Convergence criterion met after 9 iterations.

Residual SS (SSE) 0.0621
Residual MS (MSE) 5.175E-03
Standard Deviation 0.0719
Degrees of Freedom 12
AICc -70.305
Pseudo R² 0.9935

Parameter Correlations
a b c
a 1.0000
b -0.4302 1.0000
c -0.5990 0.9530 1.0000

Cases Included 15 Missing Cases 0

The results menu for the nonlinear regression offers a fitted curve plot and several
residual plots that are useful for checking goodness of fit of a model, and for
checking the assumptions of constant variance and normality (see page 286).
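For readers who want to cross-check a report like this one, the fit can be reproduced in general-purpose software. The sketch below uses SciPy's curve_fit on the bean root data; the estimates should come out close to those shown above (a = 2.15, b = 3.96, c = 6.22), though this is of course not Statistix's own code.

    import numpy as np
    from scipy.optimize import curve_fit

    # Bean root data (Sample Data\Beanroot.sx)
    distance = np.array([0.05, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75,
                         0.85, 0.95, 1.05, 1.15, 1.25, 1.35, 1.45])
    water = np.array([0.13, 0.13, 0.19, 0.34, 0.53, 0.71, 1.06, 1.60,
                      1.64, 1.83, 2.09, 2.05, 2.13, 2.12, 2.09])

    def logistic3(x, a, b, c):
        return a / (1.0 + np.exp(b - c * x))

    estimates, cov = curve_fit(logistic3, distance, water, p0=[2.0, 4.0, 6.0])
    std_errors = np.sqrt(np.diag(cov))  # standard errors of the estimates
    print(estimates, std_errors)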

Example II Our second example illustrates the use of a user-specified model with two
independent variables. The data are from an experiment designed to model a first-
order chemical reaction which converts compound A into compound B (Nash and
Walker-Smith, 1987). The data are presented below, and are available in the file
Sample Data\ChemicalReaction.sx.

Case  Remaining  Time  Temp      Case  Remaining  Time  Temp
1 0.980 0.10 100 9 0.225 0.20 200
2 0.983 0.20 100 10 0.167 0.25 200
3 0.955 0.30 100 11 0.566 0.02 300
4 0.979 0.40 100 12 0.317 0.04 300
5 0.993 0.50 100 13 0.034 0.06 300
6 0.626 0.05 200 14 0.016 0.08 300
7 0.544 0.10 200 15 0.066 0.10 300
8 0.455 0.15 200

The fraction of compound A remaining decays exponentially with time at a fixed
temperature: remaining = Exp(-k * time), where k is the rate constant of the
reaction. But the reaction proceeds faster at higher temperatures according to the
model: k = a * Exp(-b/temp). Combining these two models results in the model:
remaining = Exp(-a * Exp(-b/temp) * time). The variable Remaining is selected
as the Dependent Variable. The right-hand side of this model is entered manually
in the Nonlinear Regression Model box in the dialog box below.



The results are presented below.

Nonlinear Regression

MODEL: Remaining = Exp(-a*Exp(-b/Temp)*Time)

Lower Upper
Parameter Estimate Std Error 95% C.I. 95% C.I.
a 813.8719 246.2400 281.9026 1345.841
b 961.0025 68.53386 812.9441 1109.061

Convergence criterion met after 18 iterations.

Residual SS (SSE) 0.0398
Residual MS (MSE) 3.062E-03
Standard Deviation 0.0553
Degrees of Freedom 13
AICc -80.795
Pseudo R² 0.9803

Parameter Correlations
a b
a 1.0000
b 0.9812 1.0000

Cases Included 15 Missing Cases 0

Note that for this example, starting values of 1.0 were adequate to reach
convergence even though the final parameter estimates were orders of magnitude
larger (814 and 961).
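The same cross-check works for a user-specified model with two independent variables; in the sketch below the two predictors are passed to SciPy's curve_fit as a tuple. The estimates should land near a = 814 and b = 961, though convergence from (1.0, 1.0) in other software is not guaranteed.

    import numpy as np
    from scipy.optimize import curve_fit

    # Chemical reaction data (Sample Data\ChemicalReaction.sx)
    time = np.array([0.10, 0.20, 0.30, 0.40, 0.50, 0.05, 0.10, 0.15,
                     0.20, 0.25, 0.02, 0.04, 0.06, 0.08, 0.10])
    temp = np.array([100.0, 100.0, 100.0, 100.0, 100.0, 200.0, 200.0, 200.0,
                     200.0, 200.0, 300.0, 300.0, 300.0, 300.0, 300.0])
    remaining = np.array([0.980, 0.983, 0.955, 0.979, 0.993, 0.626, 0.544,
                          0.455, 0.225, 0.167, 0.566, 0.317, 0.034, 0.016,
                          0.066])

    def model(X, a, b):
        t, T = X
        return np.exp(-a * np.exp(-b / T) * t)

    estimates, cov = curve_fit(model, (time, temp), remaining,
                               p0=[1.0, 1.0], maxfev=10000)
    print(estimates)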



Nonlinear Once the regression analysis is computed and displayed, the Results menu icon,
Regression which appears as a shovel (think “dig-deeper”), on the toolbar is enabled. Click on
Results Menu the icon to display the results menu, as shown below.

Select Coefficient Table from the menu to redisplay the regression coefficient
table. The remaining options are described on the following pages.

Minimization The minimization history lists the error sums of squares and parameter values for
History each iteration of the function minimization algorithm. The report for the chemical
reaction example discussed on page 284 is displayed below.

MODEL: Remaining = Exp(-a*Exp(-b/Temp)*Time)

Minimization History
Iteration   Error SS          a          b
0 4.386205 1.000000 1.000000
1 3.970853 1.964669 26.99106
2 3.258909 4.053327 111.4576
3 2.277429 11.41652 285.7069
4 1.411956 37.14074 487.7149
5 0.653981 104.3922 643.3349
6 0.253357 230.1158 762.1285
7 0.089192 397.7112 838.8738
8 0.048694 564.6073 893.2753
9 0.041068 699.3300 931.7908
10 0.039888 781.3299 952.9641
11 0.039807 810.2198 960.0503
12 0.039806 813.8350 960.9849
13 0.039806 813.8668 961.0012
14 0.039806 813.8719 961.0025
15 0.039806 813.8719 961.0025
16 0.039806 813.8719 961.0025
17 0.039806 813.8719 961.0025
18 0.039806 813.8719 961.0025

The search algorithm stops when all of the parameter values stop changing. The
minimization history can be useful when a model fails to converge, or when it
converges on a local minimum rather than the global minimum.
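A similar trace can be captured in other software by logging the error sum of squares inside the residual function. The sketch below reuses the time, temp, and remaining arrays from the previous sketch; note that it records one entry per function evaluation, not one per iteration as in the Statistix report.

    import numpy as np
    from scipy.optimize import least_squares

    history = []  # (error SS, a, b) for every function evaluation

    def residuals_logged(p):
        r = remaining - np.exp(-p[0] * np.exp(-p[1] / temp) * time)
        history.append((np.sum(r ** 2), p[0], p[1]))
        return r

    fit = least_squares(residuals_logged, x0=[1.0, 1.0], method='lm')
    for sse, a, b in history:
        print(f"{sse:10.6f} {a:12.4f} {b:12.4f}")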



Prediction This option computes predicted values and confidence intervals of the dependent
variable in a regression for values of the independent variable(s) you specify. The
values for the independent variables can be indicated in two ways. One is simply
to enter the list of desired values. The other is to enter a case number that contains
the desired values for the independent variables.

An example report for the chemical reaction example (page 284) is displayed
below. See the Save Residuals procedure discussed on page 290 for an alternative
method of obtaining fitted values for cases in your data file.

Predicted/Fitted Values of Remaining

Lower Predicted Bound   0.1316    Lower Fitted Bound   0.2074
Predicted Value         0.2637    Fitted Value         0.2637
Upper Predicted Bound   0.3959    Upper Fitted Bound   0.3201
SE (Predicted Value)    0.0612    SE (Fitted Value)    0.0261

Unusualness (Leverage)  0.2221
Percent Coverage        95.0
Corresponding T         2.16

Case number 9 was used to estimate the regression
Predictor Values: Temp = 200.00, Time = 0.2000

See Linear Regression for a description of the report.

Plots Statistix provides plots of fitted curves and surfaces, plus four regression plots
directly from the Results menu. You can also save fitted values and residuals
(page 290) and use the plotting procedures described in Chapter 4 to graph
additional residual plots.

The Fitted Curve plot is available when there is one independent variable in the
model. The dialog box for the fitted curve includes options to display points
representing the observed values and vertical bars representing the confidence
interval of the curve. The fitted curve for the bean root cells example on page 283
is displayed below.



The Contour Plot is used to visualize the surface for models with exactly two
independent variables. The lines on the plot represent increasing levels of the
dependent variable in the same way that lines of a contour map show elevation.
See page 301 for an example.

Coplots of scattered data were discussed in Chapter 4. The Surface Coplot
procedure produces conditioning plots of fitted surfaces for models with two or
three independent variables.

The dialog box on the next page specifies a coplot for the chemical reaction
example described on page 284. The variable Time has been selected for the X
Axis Variable, and Temp for the Horizontal Given Variable. The experiment was
carried out at three different temperatures, so 3 has been entered into the box for
Horizontal Intervals.



The resulting coplot is displayed below. The coplot has three dependency panels,
one panel for each interval of the given variable Temp. The panels are ordered
from left to right and bottom to top. The panel corresponding to the first interval
(100 K) is located at the bottom left. The second panel (200 K) is located at the
bottom right.

The Residuals by Fitted Values plot is a scatter plot with the residuals on the Y
axis and the fitted values on the X axis. You use this plot to check the assumption
of constant variances.

The Residuals by Variable plot is a scatter plot with the residuals on the Y axis
and the observed values of an independent variable on the X axis. You use this
plot to check the assumption of constant variances.

The two residual plots discussed above include an option to draw a loess smoothed
curve through the points to help visualize patterns. The loess curve requires two
additional parameters. Loess Alpha is a smoothing constant normally between
0.50 and 1.0. The higher the value, the smoother the curve. Loess Degree is either
Linear, suitable for nearly linear relationships, or Quadratic, suitable for nonlinear
relationships. We recommend you start with alpha = 1.0 and degree = linear for
residual plots. The Loess procedure described on page 291 offers additional
options.

The Residuals Time Series plot is a scatter plot with case numbers on the X axis.
You’ll find this plot useful when the data are sequentially ordered.

The Normal Probability Plot is used to create a rankit plot of the standardized
residuals to test the assumption of normality (see Chapter 12).

Save Statistix can do extensive residual analysis, which is very important for model
Residuals evaluation. Systematic patterns in the residuals can suggest transformations or a
different model.

Enter the name of a new or existing variable in each box for which you want to
save computed values. Press the OK button to start the computations. The results
are not displayed. The residuals are stored as variables and can be examined using
procedures such as Print, Scatter Plot, Shapiro-Wilk Test, and Runs Test. Each of
the residual options is described on page 176.



Variance- Select this option to obtain the variance-covariance matrix of the nonlinear
Covariance of regression parameter estimates. The report for the bean root cell example
Betas discussed on page 283 is displayed below.

Variance-Covariance Matrix of Parameters

a b c
a 1.725E-03
b -4.679E-03 0.06858
c -0.01110 0.11131 0.19894

The diagonal elements of the matrix are the variances of the regression
coefficients. The off-diagonal values are the covariances. This matrix is most
commonly used for constructing confidence regions about coefficient estimates,
and for testing hypotheses about functions of the coefficient estimates. See Neter
et al. (1989) for an example.

Computational Computations follow Nash and Walker-Smith (1987), and Nash (1990). See also
Notes Neter et al. (1989). Numerical derivatives are used (Nash and Walker-Smith,
1987).

Loess
The Loess procedure fits smoothed curves and surfaces to multivariate scattered
data using locally weighted regression. In a local regression model, the dependent
and independent variables are related by
yi = g(xi) + εi
where g is the regression surface and εi are random errors. The idea is that in a
given neighborhood of a point in the x space, the surface can be approximated
using either linear or quadratic polynomials. For any xi, the fitted value for yi is
computed using only those cases where the values of the independent variables are
in the neighborhood of xi and those closest to xi are given greater weight.
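To make the idea concrete, the sketch below computes a loess fitted value at a single point for one independent variable: the nearest alpha-fraction of the observations receives tricube weights, and a weighted polynomial of the chosen degree is fitted and evaluated there. It follows Cleveland's published description and is only a sketch, not Statistix's implementation.

    import numpy as np

    def loess_fit_at(x0, x, y, alpha=0.65, degree=2):
        n = len(x)
        q = int(np.ceil(alpha * n))          # neighborhood size
        d = np.abs(x - x0)
        idx = np.argsort(d)[:q]              # the q nearest points to x0
        h = d[idx].max()                     # neighborhood radius
        w = (1.0 - (d[idx] / h) ** 3) ** 3   # tricube weights
        # Weighted least squares fit of the local polynomial
        A = np.vander(x[idx], degree + 1)
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(A * sw[:, None], y[idx] * sw, rcond=None)
        return np.polyval(coef, x0)

Evaluating such a fit over a grid of x0 values traces out the smoothed curve.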

Specification The dialog box appears on the next page. Select the Dependent Variable and one
or more Independent Variables.

It’s assumed that the errors εi are independent random variables with mean zero.
You may choose from two Families of probability distributions: Normal
(Gaussian) or Symmetric. Selecting symmetric allows for situations where the
tails of the error distribution are stretched out compared to normal, and leads to
robust methods of estimation (Cleveland et al., 1992).



There are two ways to specify the properties of the variances of εi. The first is that
they are a constant, σ². The second is that √wi·εi has constant variance σ², where
the a priori weights wi are positive and known. In the latter case, move the name of
the variable containing the weights to the Weight Variable box.

Specify a value for the neighborhood parameter, Alpha. The value must be greater
than zero, and is usually in the range 0.25 to 1.0. As alpha increases, the fitted
function or surface becomes smoother. Using too small a value leads to surplus of
fit (the curve or surface is too sensitive to individual data points). Using too large
a value leads to lack of fit (residual plots reveal patterns). You make a guess for
the initial value for alpha, and then look at residual plots to determine if the value
should be decreased or increased.

To determine neighborhood weights, we need to establish the distance between
two observations. For models with one independent variable, the units of the raw
data are used. When there are two or more variables, the units of the variables may
differ, in which case you want to have the values normalized (divided by their
trimmed standard deviations). There are some cases where it makes sense not to
normalize the data, such as points in space. Select whether or not to have
distances normalized by checking or unchecking the Normalize check box.

The local regression can be fitted using either linear or quadratic polynomials.
Specify your choice by selecting one of the options for Degree. Suppose you have
two independent variables, u and v. Using linear, the fit is made using three
monomials: a constant, u, and v. Using quadratic, the fit is made using five
monomials: a constant, u, v, uv, u², and v².

The Independent Variables Options grid allows you to make modifications to the
design when you have two or more independent variables in the model. For
quadratic models, entering Yes in the Drop Square column removes the squared
monomial for the variable in that row. For example, if you have independent
variables u and v, and you drop the square for variable u, then the fit uses four
monomials: a constant, u, v, uv, and v².

When you have two or more independent variables, you can have the surface be
Conditionally Parametric in any proper subset of the variables. For example,
suppose you have independent variables u and v and you’ve selected quadratic for
degree. If you specify that the surface be conditionally parametric in u, then given
v, the surface is quadratic in u. The general form of the surface is β0(v) + β1(v)u +
β2(v)u².

Data You may include up to five numeric independent variables. Factor variables are
Restrictions not allowed.

Example I This first example has one independent variable. The data are from an industrial
experiment that studied exhaust from an experimental engine using gasoline for
fuel (Cleveland et al., 1992). The data are displayed below, and are available in
the file Sample Data\Gasoline.sx.

Case    NOx      E        Case    NOx      E
1 4.818 0.831 12 5.344 0.928
2 2.849 1.045 13 3.834 0.767
3 3.275 1.021 14 1.990 0.701
4 4.691 0.970 15 5.199 0.807
5 4.255 0.825 16 5.283 0.902
6 5.064 0.891 17 3.752 0.997
7 2.118 0.710 18 0.537 1.224
8 4.602 0.801 19 1.640 1.089
9 2.286 1.074 20 5.055 0.973
10 0.970 1.148 21 4.937 0.980
11 3.965 1.000 22 1.561 0.665

The dependent variable is NOx, the concentration of nitric oxide plus nitrogen
dioxide in the exhaust. The independent variable is E, the equivalence ratio (a
measure of the richness of air and fuel). Small data sets like this one won’t allow
us to use a small value for alpha (neighborhoods would contain too few points), so
we’ll start with 0.65. A preliminary scatter plot of the data (not shown) shows an
obviously nonlinear relationship, so we’ll select quadratic for degree. The model
is specified using the dialog box on the preceding page. The results are displayed
below.



Loess - Local Regression Models

Model Design
Dependent Variable: NOx
Independent Variable: E
Alpha: 0.65
Error Dist. Family: Normal
Degree: Quadratic

Model Statistics
Equivalent Number of Parameters 5.7
Residual Standard Error 0.3404
R² 0.9643
Delta 1 15.5
Delta 2 15.2
Rho (look-up DF) 15.8

Cases Included 22 Missing Cases 0

The report displays a number of model statistics. The equivalent number of
parameters is a measure of the amount of smoothing, and is analogous to the
number of parameters in a parametric fit. The residual standard error is the
estimate of σ. R² expresses the proportion of the total sums of squares explained
by the model. Rho is the value for degrees of freedom used to construct
confidence intervals.

Before accepting these results, it’s important to look at the diagnostic plots to
assess the goodness of fit. First let’s look at the fitted curve displayed below. The
dialog box for the Fitted Curve procedure on the results menu has options to plot
the observations and to display confidence interval bars. The curve seems to fit the
data points well.

Next, we’ll use the Residuals by Variable Plot procedure available on the results
menu to check for lack of fit. We’ll also use the Residuals by Fitted Values plot to
check the distributional specifications for the error terms. The scale of the
residuals shouldn’t depend on the level of the surface.



To help identify dependencies, a loess curve (alpha = 1.0, degree = quadratic) has
been added to each plot. No effect appears to be present in either of these plots, so
we conclude that a value of 0.65 for alpha hasn’t introduced lack of fit. To check
for surplus of fit, we can try increasing the value of alpha (refer to the dialog box
on page 292) from 0.65 to 1.0. The equivalent number of parameters is
reduced to 3.5 and the residual standard error is increased to 0.5150 for the new
model. The diagnostic residual plots for the second fit are displayed below.

These plots show strong signals in the residuals. The plot on the left shows a
dependency of the residuals on the independent variable E. The plot on the right
shows a dependency of the residuals on the level of the surface. The value of 1.0
for alpha is too high: It introduces lack of fit. So let’s return to the first fit using the
value of 0.65 for alpha.

To check the assumption of a normal distribution of the errors, use the Normal
Probability Plot available on the results menu. The plot, displayed below, suggests
that the normal specification is justified.



Example II The experiment described in example I above was also performed using ethanol for
fuel. This experiment used two independent variables: E as described above, and
C, the compression ratio. The dependent variable is again NOx, the combined
concentration of nitric oxide and nitrogen dioxide in the exhaust. The data are
available in the file Sample Data\Ethanol.sx.

Several models were considered for this data set (Cleveland et al., 1992), and we’ll
look at three of them. The final model is specified in the dialog box below.

The model statistics for candidate models are given below.

Model Design                      Model 1      Model 2      Model 3
Dependent Variable: NOx NOx NOx
Independent Variables: C E C E C E
Alpha: 0.25 0.25 0.5
Error Dist. Family: Normal Normal Normal
Degree: Quadratic Quadratic Quadratic
Normalize: Yes Yes Yes
Drop Square: C C C
Conditionally Parametric: C C

Model Statistics
Equivalent Number of Parameters 20.0 20.2 10.3
Residual Standard Error 0.1728 0.1770 0.1841
R-Squared 0.9840 0.9822 0.9770
Rho (look-up DF) 62.8 64.5 76.3

Cases Included 88 Missing Cases 0

A preliminary scatter plot matrix of the data (not shown) shows a nonlinear
relationship between NOx and E, but no apparent relationship between NOx and



C. However, a coplot of NOx on C, given E (see the Coplot example in Chapter 4)
reveals an interaction between C and E. For low levels of E, NOx increases with
C, and for high levels of E, NOx is a constant function of C. These observations
lead us to specify a quadratic fit because of the nonlinear relationship between
NOx and E, but to drop the square for the variable C since its relationship to NOx
is near linear, given E.

Removing C² from the fitting variables leaves us with a constant, E, C, EC, and E².
Cleveland found that a value of 0.5 for alpha produced residual plots that indicated
lack of fit, so let’s start with alpha = 0.25. This leads us to model 1 in the table on
the preceding page. An examination of the residual plots (not shown) shows that
model 1 doesn’t introduce lack of fit.

However, a coplot of the fitted surface for model 1 indicates that the surface is
very smooth with respect to variable C, and the relationship between NOx and C is
nearly linear, given E. Let’s try making C conditionally parametric; this will result
in a fit that is linear in C given E (model 2 in the table). Making this change led to
a modest reduction in the equivalent number of parameters, and a small increase in
the residual standard error.

The big advantage of making the surface conditionally parametric in C is that we
can now increase alpha without introducing lack of fit. Model 3 in the table on the
preceding page uses alpha = 0.5, and as a result the equivalent number of
parameters was cut in half with a modest increase in the residual standard error,
making for a more parsimonious fit.

As with loess models with one independent variable, evaluation of models with
two independent variables begins with plots of residuals against each independent
variable. But you should also look at residual coplots for evidence of lack of fit
that may be hidden in the marginal plots. See page 303 for an example.

The fitted surface of a loess model with two or three independent variables can be
evaluated using a series of surface coplots. See page 301 for an example.

Loess Results Once the loess analysis is computed and displayed, the Results menu icon, which
Menu appears as a shovel (think “dig-deeper”), on the toolbar is enabled. Click on the
icon to display the results menu.

Select Coefficient Table from the menu to redisplay the model statistics. The
remaining options are discussed on the following pages.

Analysis of The analysis of variance procedure is used to compute an F test to test a null model
Variance against an alternative model. The null model must be nested in the alternative. A
null model is nested in the alternative model if the following conditions are true:
1) alpha(null) ≥ alpha(alt).
2) degree(null) ≤ degree(alt).
3) If the square of a variable is dropped in the alternative model, then it must be
dropped in the null model.
4) The models must have the same variables, with the following exception: a
conditionally parametric variable in the alternative model may be removed from
the null model.

The alternative model is the model specified in the main loess dialog box (see page
296). The null model is specified in the lower panel of the dialog box below.

To specify a null model, you must do one or more of the following:
1) Increase alpha.
2) Decrease degree from quadratic to linear.
3) Drop a square of a variable not dropped in the alternative.
4) Change a variable to conditionally parametric.
5) Remove a conditionally parametric variable.

The dialog on the preceding page specifies a comparison of models 1 and 3 in
example II (refer to the table of model statistics on page 296). The null model
(model 3) has a greater value for alpha, and the variable C is conditionally
parametric. The F test is analogous to that of parametric models. The results for
our example are displayed below.

Loess - Analysis of Variance

                             Null Model    Alternative Model
Dependent Variable: NOx NOx
Independent Variables: C E C E
Alpha: 0.5 0.25
Normalize: Yes Yes
Degree: Quadratic Quadratic
Error Dist. Family: Normal Normal
Drop Square: C C
Conditionally Param: C

Model         ENP        RSS        F         P
Null 10.3 2.57278 1.64 0.0921
Alternative 20.0 1.79011

Cases Included 88 Missing Cases 0

Model 3 (the null model) is more attractive because the equivalent number of
parameters is so much lower than model 1 (the alternative model). The F test is
not significant (p = 0.0921), so it’s fair to conclude that the null model is capable
of capturing any effect that the alternative model can capture. Had the F test been
significant (say, p < 0.05), you would have concluded that the null model
introduced distortion.

Prediction This option computes fitted values and confidence intervals of the dependent
variable for the loess model for values of the independent variable(s) you specify.
An example report for the ethanol example (page 296) is displayed below.

Fitted Values of NOx

Lower Fitted Bound   2.9798
Fitted Value         3.0691
Upper Fitted Bound   3.1585
SE (Fitted Value)    0.0449

Percent Coverage     95.0
Corresponding T      1.99

Predictor Values: C = 12.000, E = 1.0000



The values for the independent variables can be indicated in two ways. One is to
enter the list of desired values. The other is to enter a case number that contains
the desired values for the independent variables.

See the Save Residuals procedure discussed on page 304 for an alternative method
of obtaining fitted values for cases in your data file.

Plots Statistix provides plots of fitted curves and surfaces, plus four regression plots
directly from the Results menu. You can also save fitted values and residuals
(page 304) and use the plotting procedures described in Chapter 4 to graph
additional residual plots.

The Fitted Curve plot is available when there is one independent variable in the
model. The dialog box used to produce the fitted curve for the gasoline example
on page 294 is displayed below. It includes options to display points representing
the observed values and vertical bars or curves to represent the confidence interval
of the fitted curve.

The Contour Plot is used to visualize the surface for models with exactly two
independent variables. The lines on the plot represent increasing levels of the
dependent variable in the same way that lines of a contour map show elevation. A
dialog box used for the ethanol example is displayed on the next page.

You may specify values for low, high, and step for the X, Y, and Z axes to
control the scales. Contour plots are drawn using major (black) and minor (gray)
contour lines. You can control the number of major contour lines by specifying
low, high, and step for the Z axis (NOx in this example). Enter a value in the
Minor Contour Lines box to specify the number of minor lines to appear between
each pair of major lines.



The contour plot for the ethanol example described on page 296 is displayed
below.

The Surface Coplot procedure produces conditioning plots of fitted surfaces for
models with two or three independent variables. The dialog box on the next page
specifies a coplot for the ethanol example described on page 296.



The variable C (compression ratio) has been selected for the X Axis Variable, and
E (equivalence ratio) for the Horizontal Given Variable. A value of 16 is entered
in the Horizontal Intervals box which will produce 16 dependency panels. 99%
confidence interval bars have been specified. The resulting coplot is displayed
below.

The coplot has 16 dependency panels, one panel for each interval of the given
variable E. The panels are ordered from left to right and bottom to top. The panel
corresponding to the first interval of E is located at the bottom left. The sixteenth
panel representing the highest interval of E is located at the top right. A second
coplot can be graphed by reversing the roles of C and E.

For models with three independent variables, you specify a vertical given variable
in addition to the horizontal given variable. By doing so, you’ll produce a matrix
of dependency panels, one column of panels for each interval of the horizontal
given variable, and one row for each interval of the vertical given variable. See the
section on Coplots in Chapter 4 for an example coplot of scattered data that uses a
vertical given variable and illustrates the concept.

The Residuals by Fitted Values plot is a scatter plot with the residuals on the Y
axis and the fitted values on the X axis. You use this plot to check the assumption
of constant variances. See page 295 for examples.

The Residuals by Variable plot is a scatter plot with the residuals on the Y axis
and the observed values of an independent variable on the X axis. You use this
plot to check the assumption of constant variances. You need to select one of the
independent variables for the plot, and you should examine the plot for each
variable. See page 295 for examples.

For models with two or three independent variables, you should also look at
Residual Coplots for evidence of lack of fit that may not be apparent in the
Residuals by Variable plots. The dialog box for coplots of residuals is similar to
the dialog box for coplots of scattered data discussed in Chapter 4. But rather
than plotting the dependent variable on the Y axis, the values of the residuals
are plotted.

One of the examples of coplots of scattered data discussed in Chapter 4 used data
from an environmental study where the effects of radiation, temperature, and wind
speed on the air pollutant ozone were studied (Sample Data\Air.sx). After fitting a
loess model using the variable ozone as the dependent variable, the coplot of
residuals displayed on the next page was obtained.

The three residual plots discussed above include an option to draw a loess
smoothed curve through the points to help visualize patterns. To include the
curve, check the Loess Smooth checkbox and enter values for Loess Alpha and
Loess Degree.

The Normal Probability Plot is used to create a rankit plot of the residuals to test
the assumption of normality (see Chapter 12).



Save Use the Save Residuals procedure to save the fitted values, residuals, and lower
Residuals and upper limits of the confidence interval of the fitted surface.

Enter the name of a new or existing variable in each box for which you want to
save computed values. If you’re saving the confidence interval, specify the C.I.
Percent Coverage. Press the OK button to start the computations. The results
aren’t displayed. The residuals are stored as variables and can be examined using
procedures such as Print, Scatter Plot, and Runs Test.

Computational Computations follow Cleveland et al. (1992) and Cleveland (2007). Statistix
Notes performs five iterations for robust estimation when the symmetric error
distribution family is selected.



CHAPTER 11

Association Tests
Statistix offers many association tests that can be used to examine the similarity or
association among two or more variables.

The Multinomial Test is a goodness-of-fit test that tests how well frequencies of
mutually exclusive categories fit a hypothesized distribution.

The Chi-Square Test computes the traditional chi-square goodness-of-fit test for
two-way tables. Two hypotheses can be examined with this test: the hypothesis of
independence, and the hypothesis of homogeneity.

The Kolmogorov-Smirnov Test is useful for comparing the similarity of the
distributions of samples from two populations. If there is an intrinsic ordering to
the categories, the Kolmogorov-Smirnov test is usually better than the chi-square
test because it can exploit the information in the ordering while the chi-square
analysis cannot.

McNemar’s Symmetry Test is a goodness-of-fit test that’s often useful for
measuring change. It’s used to analyze square contingency tables; often the rows
represent classifications before some event, while the columns represent the same
classes after some event. Individuals may be in one class before the event but in
another class after the event. However, if the table is symmetric about the diagonal
from the upper left to the lower right, there will be no net shift in the row and
column proportions before and after. McNemar’s test examines whether the table
is in fact symmetric.

The Two by Two Tables procedure computes a variety of tests of association for
two by two contingency tables. A typical example of a two by two table is where a
number of individuals are cross-classified by two dichotomous variables, such as
treated-not treated and survived-died. The tests include Fisher’s exact test,
Pearson chi-square, log odds ratio, and others, along with standard errors.

The Log-Linear Models procedure is a powerful tool for analyzing discrete
multidimensional categorical data. Log-linear models are the discrete data analogs
to analysis of variance. If a set of discrete data has more than two classifying
variables, you may be tempted to analyze such data as a series of two-way tables
with traditional chi-square tests. However, the danger of such an approach is that
collapsing the data over some categorical variables results in these variables
becoming confounded with the remaining two categorical variables. Log-linear
models allow all dimensions of multidimensional contingency tables to be treated
simultaneously and so avoid such potential confounding.

The Correlations procedure measures the degree of linear association between
two variables. The Partial Correlations procedure allows you to examine the
degree of linear association between two variables after the effects of other
variables have been “adjusted out”. These procedures also appear on the Linear
Models menu and are discussed in Chapter 8.

The Spearman Rank Correlations procedure produces nonparametric correlation
coefficients that are suitable for examining the degree of association when the
samples violate the assumption of bivariate normality.

Multinomial Test
The Multinomial Test is a goodness-of-fit test that tests how well frequencies of
mutually exclusive categories fit a hypothesized distribution. For example, it can
be used to test whether or not a sample of 100 rolls of a die support the hypothesis
that each number is equally likely to be rolled. The large-sample chi-square
approximation is used for this test.

Specification

The test requires two variables. The Hypothesized Proportions Variable contains
the list of hypothesized proportions. The values can be entered as proportions that
sum to 1, or on any arbitrary scale such that the relative values represent the
hypothesized proportions. The Observed Frequencies Variable contains the
corresponding list of observed frequencies.

Example In crosses between two types of maize, four distinct types of plants were found in
the second generation (Snedecor and Cochran, 1980): green, golden, green-striped,
and golden-green-striped. According to a simple type of Mendelian inheritance,
the probabilities of obtaining these four types of plants are 9/16, 3/16, 3/16, and
1/16. The frequencies tabulated for a sample of 1301 plants, and the expected
ratios, are listed below.



Type Observed Expected
green 773 9
golden 231 3
green-striped 238 3
golden-green-striped 59 1

The analysis is specified using the dialog box on the preceding page. The results
appear below.

Multinomial Test

Hypothesized Proportions Variable: Expected
Observed Frequencies Variable: Observed

          Hypothesized    Observed    Expected    Chi-Square
Category    Proportion   Frequency   Frequency  Contribution
1             0.562500         773      731.81          2.32
2             0.187500         231      243.94          0.69
3             0.187500         238      243.94          0.14
4             0.062500          59       81.31          6.12

Overall Chi-Square      9.27
P-Value               0.0259
Degrees of Freedom         3

The p-value for the overall chi-square test is 0.0259, so we reject the null
hypothesis that the 9:3:3:1 ratio is correct. We see from the chi-square
contribution column that category 4 gives the largest contribution to chi-square.
The original researcher noted that the golden-green-striped plants were not
vigorous due to their chlorophyll abnormality.
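The same test is straightforward to verify with general-purpose software; the sketch below reproduces the overall chi-square and p-value using SciPy (which is, of course, not part of Statistix).

    import numpy as np
    from scipy.stats import chisquare

    observed = np.array([773, 231, 238, 59])
    ratios = np.array([9, 3, 3, 1])                    # hypothesized 9:3:3:1
    expected = observed.sum() * ratios / ratios.sum()  # 731.81, 243.94, ...

    chi2, p = chisquare(observed, f_exp=expected)
    print(chi2, p)   # about 9.27 and 0.026, matching the report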

Chi-Square Test
The Chi-Square Test procedure is used to analyze two-dimensional tables of
discrete data. Two hypotheses can be examined; the hypothesis of independence
examines whether the row-classifying variable acts independently of the
column-classifying variable, and the hypothesis of homogeneity tests whether the
relative frequency distributions for each of the rows or columns are the same. The
appropriate hypothesis choice depends on how the sampling was performed. The
calculations involved are identical for the two tests.

Specification The analysis can be specified in two ways, depending on how you choose to enter
the data. If you enter the data in columns using two categorical variables to
identify the rows and columns, select the Categorical method and identify the two
classifying variables and, optionally, a dependent variable containing counts. You
can also enter the data as a two-dimensional table, with variables identifying the
columns and cases identifying the rows. This is called the Table method.



Select either the Categorical or the Table method that fits the way your data have
been entered. Using the Categorical method, select the classifying variable that
identifies the rows of the contingency table and move it to the Row Variable box.
Select the variable that identifies the columns and move it to the Column Variable
box. If each case represents one observation, don’t select a Counts Variable. If
the data are summarized, select the variable that contains the counts for each case.

Using the Table method, select the variables that will represent the columns of the
contingency table and move them to the Table Variables box.

Data No more than 500 row and column categories are allowed. Missing values are not
Restrictions permitted. The count data must be nonnegative integer values.

Example We use the data from Table 8.2-1 in Bishop, Fienberg, and Holland (1975) for our
example. The data are the results of vision tests for 7,477 women. The variables
Left and Right are the scores for the left and right eyes, respectively. Each eye was
assigned a score 1, 2, 3, or 4. The counts that fall in each of the cells of the
contingency table are in the variable Counts. The data are presented on the next
page, and are stored in the file Sample Data\Vision.sx.



Case Counts Left Right
1 1520 1 1
2 234 1 2
3 117 1 3
4 36 1 4
5 266 2 1
6 1512 2 2
7 362 2 3
8 82 2 4
9 124 3 1
10 432 3 2
11 1772 3 3
12 179 3 4
13 66 4 1
14 78 4 2
15 205 4 3
16 492 4 4

The analysis is specified on the preceding page. The results are displayed below.

Chi-Square Test for Heterogeneity or Independence
for Counts = Left Right

                                  Right
Left                  1        2        3        4    Total

1  Observed        1520      234      117       36     1907
   Expected      503.98   575.39   626.40   201.23
   Cell Chi-Sq  2048.32   202.55   414.25   135.67

2  Observed         266     1512      362       82     2222
   Expected      587.22   670.43   729.87   234.47
   Cell Chi-Sq   175.72  1056.38   185.41    99.15

3  Observed         124      432     1772      179     2507
   Expected      662.54   756.43   823.48   264.55
   Cell Chi-Sq   437.75   139.14  1092.53    27.66

4  Observed          66       78      205      492      841
   Expected      222.26   253.75   276.25    88.75
   Cell Chi-Sq   109.86   121.73    18.38  1832.37

Total              1976     2256     2456      789     7477

Overall Chi-Square    8096.88
P-value                0.0000
Degrees of Freedom          9

In each cell, the original observed value, the value expected under the hypothesis
of independence or homogeneity, and the cell contribution to the overall chi-square
are displayed. In this example, the hypothesis of independence is appropriate. If
either the row or the column totals had been predetermined before the sample was
taken, the hypothesis of homogeneity would have been appropriate. The
calculations are the same for either hypothesis.

In our example, the large differences between the expected values and the observed
values indicate that the model of independence is not an acceptable model for this
data set (p-value = 0.0000). This seems quite reasonable; one might reasonably
expect the vision score for one eye to have a positive association with that for the

Chapter 11, Association Tests 309


other eye.
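For readers who want an outside check on a result like this, the sketch below runs the same table through SciPy; with no continuity correction it should reproduce the overall chi-square of about 8096.9 on 9 degrees of freedom.

    import numpy as np
    from scipy.stats import chi2_contingency

    counts = np.array([[1520,  234,  117,   36],
                       [ 266, 1512,  362,   82],
                       [ 124,  432, 1772,  179],
                       [  66,   78,  205,  492]])

    chi2, p, dof, expected = chi2_contingency(counts, correction=False)
    print(chi2, dof, p)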

It’s more convenient in some situations to store the column categories as separate
variables. For example, the data listed on the preceding page can be rearranged
using the Table format.

Case   Right1   Right2   Right3   Right4
1 1520 234 117 36
2 266 1512 362 82
3 124 432 1772 179
4 66 78 205 492

The variables Right1, Right2, Right3, and Right4 contain the data for columns 1
through 4 of the table (right eye scores 1 through 4). Each variable has four cases,
and the order of the cases represent rows 1 through 4 of the table (left eye scores 1
through 4). The analysis for these data is specified by moving the four variables to
the Table Variables box.

The most common problem in applying the chi-square test is that it becomes
unreliable when numerous expected cell values are near zero. Snedecor and
Cochran (1980, p. 77) give the following general rules:
1) No expected values should be less than one.
2) Two expected values may be close to one if most of the other expected
values exceed five.
3) Classes with expectations less than one should be combined to meet 1) and
2).

Statistix doesn’t use the correction for continuity; we find the arguments given by
Fienberg (1980) for not using the correction compelling.

Fisher’s exact test is reported for two by two tables (two categories in both the
rows and columns). The Two by Two Tables procedure reports additional
statistics.

Multiple The results menu for the chi-square test includes a procedure to compute multiple
Comparisons comparisons for proportions in 2 × C contingency tables. The test is equivalent to
for Proportions Tukey’s HSD test for multiple comparison of means in the one-way analysis of
variance (Zar, 1999).

Consider the 2 × 4 contingency table from Zar.

Events       Group1   Group2   Group3   Group4
Successes 32 43 16 9
Fails 55 65 64 16

We’re interested to know if the proportions of successes are different for the
various groups. The chi-square test is significant (p = 0.0295). The multiple
comparison procedure transforms the proportions using the angular transformation
(also called the arcsine-square root transformation), which spreads out proportions
near 0 or 1 and stabilizes the variance. It produces a triangular matrix of
differences between pairs of transformed proportions. The differences that are
significantly different are marked with asterisks.

Multiple Comparisons for Proportions

               P   Angle(P)   Group1   Group2   Group3
Group1 0.3678 37.42
Group2 0.3981 39.18 1.75
Group3 0.2000 26.83 10.60 12.35*
Group4 0.3600 37.18 0.24 1.99 10.36

Alpha 0.05
Critical Q Value 3.633

The column labeled P lists the proportions of values found in the first row of data
(the first column of data for R × 2 tables). The column labeled Angle(P) contains
the transformed proportions. The values in the body of the table are differences
between the transformed proportions. Only one pair of proportions is
significantly different.

Kolmogorov-Smirnov Test
The Kolmogorov-Smirnov Test procedure examines whether two samples have
the same distribution. You must order the categories within the samples. The test,
which is also known as the Smirnov test, is sensitive to any differences between
the distributions, including differences in means and variances. It’s generally
preferable to a chi-square test because it exploits the information in the ordering of
the categories.

Specification The sample counts for one distribution are in one variable, and the counts for the
other distribution are in a second variable. It’s assumed that the ordering of the
cases in the two variables reflects the ordering of the categories.

Select the names of the two variables containing the samples. The order of the
variables isn’t important, except that the signs of the resulting one-tailed statistics
are reversed.

Data The data must be nonnegative whole numbers and can’t exceed 99,999. There
Restrictions must be at least five cases.

Example The data for our example are fabricated. Suppose you want to examine whether
students’ test scores are the same under two teaching methods. The variables
Method1 and Method2 represent the number of students receiving a particular
grade. There are 20 questions on the test, so there are 20 categories corresponding
to the total number of possible scores. The data are listed in the table on the next
page.

Case Method1 Method2


1 1 7
2 0 5
3 0 7
4 1 5
5 1 6
6 0 8
7 6 6
8 12 7
9 18 7
10 30 4
11 24 5
12 20 7
13 21 8
14 8 9
15 2 5
16 1 5
17 0 6
18 0 7
19 0 5
20 1 7

The analysis is specified on the preceding page. The results are displayed on the
next page.

The plot at the top of the report displays the sample cumulative distribution
functions. The Kolmogorov-Smirnov two-sample test is based on the maximum
difference between the cumulative distribution functions. The location of the
greatest negative difference between the two curves is shown by the column of
“!”s. The greatest positive difference is located at the column of “+”s. The two-
tailed test, which tests the hypothesis that the two curves are different, is based on
the difference with the greatest absolute value, 0.29 in the example. The very small
p-value indicates that the distributions are indeed different. One-tailed tests are
also computed to test (1) whether the largest positive difference is greater than
expected by chance if the two distributions are identical, and (2) whether the
smallest negative difference is smaller than expected.
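
The two-sample statistic is simple to compute from the category counts. A
minimal Python sketch (illustrative, not Statistix code) using the example data:

import numpy as np

method1 = np.array([1, 0, 0, 1, 1, 0, 6, 12, 18, 30, 24, 20, 21, 8, 2, 1, 0, 0, 0, 1])
method2 = np.array([7, 5, 7, 5, 6, 8, 6, 7, 7, 4, 5, 7, 8, 9, 5, 5, 6, 7, 5, 7])

# Sample cumulative distribution functions over the ordered categories
f1 = np.cumsum(method1) / method1.sum()
f2 = np.cumsum(method2) / method2.sum()

diff = f1 - f2
d_two = np.max(np.abs(diff))   # two-tailed statistic, about 0.29 here
d_pos = np.max(diff)           # greatest positive difference, about 0.27
d_neg = np.min(diff)           # greatest negative difference, about -0.29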

Two Sample Kolmogorov-Smirnov (Smirnov) Test

1.0 * ------------------===
* --- ======
* + ===
* --- + ======
* + ===
* --- ===
* ===
* ======
* ======
* ===
* === - ---
* ====== -
* === - ---
* ====== ---
0.0 * ------------------
      ------------------------------------------------------------
--- Method1 === Method2

Sample size for Method1 146


Sample size for Method2 126

Hypothesis: Method1 <> Method2

Two-tailed Kolmogorov-Smirnov Statistic 0.29


P-Value (Smirnov's Chi-Square Approx.) 0.0000

Hypothesis: Method1 < Method2


One-tailed Kolmogorov-Smirnov Statistic -0.29
P-Value (Smirnov's Chi-Square Approx.) 0.0000

Hypothesis: Method1 > Method2


One-tailed Kolmogorov-Smirnov Statistic 0.27
P-Value (Smirnov's Chi-Square Approx.) 0.0001

Note that in this example both one-tailed tests are significant. The p-values are
based on Smirnov’s (1939) approximation using the chi-square distribution.
P-values are computed only if the sums of the observations in both variables
exceed 15.

See Lehmann (1975) and Hollander and Wolfe (1973) for more detail.

McNemar’s Symmetry Test


The McNemar’s Symmetry Test procedure tests whether a square contingency
table is symmetric about the diagonal running from the upper left to the lower
right. It is actually a generalized version of McNemar’s test developed by Bowker
(1948).

Specification
The analysis can be specified in two ways, depending on how you choose to enter
the data. If you enter the data in columns using two categorical variables to
identify the rows and columns, select the Categorical method and identify the two
classifying variables and, optionally, a dependent variable containing counts. You
can also enter the data as a two-dimensional table, with variables identifying the
columns and cases identifying the rows. This is called the Table method.

Select either the Categorical or the Table method that fits the way your data have
been entered. Using the Categorical method, select the classifying variable that
identifies the rows of the contingency table and move it to the Row Variable box.
Select the variable that identifies the columns and move it to the Column Variable
box. If each case represents one observation, don’t select a Counts Variable. If
the data are summarized, select the variable that contains the counts for each case.

Using the Table method, select the variables that will represent the columns of the
contingency table and move them to the Table Variables box.

Data Restrictions
The contingency table must be square, i.e., the number of rows and columns must
be equal. No more than 500 row and column categories are allowed. Missing
values aren’t permitted. The count data should be nonnegative integer values.

Example
We use the data from Table 8.2-1 in Bishop, Fienberg, and Holland (1975) for our
example. The data are the results of vision tests for 7,477 women. Each eye was
assigned a score 1, 2, 3, or 4.

Case RIGHT1 RIGHT2 RIGHT3 RIGHT4


1 1520 234 117 36
2 266 1512 362 82
3 124 432 1772 179
4 66 78 205 492

The data were entered in the Table format (see the Chi-Square Test for an example
of the Categorical method). Variables represent right eye scores 1 through 4, cases
represent left eye scores 1 through 4. For example, 1,520 women were assigned a
score of 1 in both eyes.

The analysis of the vision test data is specified on the preceding page. The results
are presented below.

McNemar's Symmetry Test

Variable
Case Right1 Right2 Right3 Right4 Total

1 Observed 1520 234 117 36 1907


Expected 1520.00 250.00 120.50 51.00
Cell Chi-Sq 0.00 1.02 0.10 4.41

2 Observed 266 1512 362 82 2222


Expected 250.00 1512.00 397.00 80.00
Cell Chi-Sq 1.02 0.00 3.09 0.05

3 Observed 124 432 1772 179 2507


Expected 120.50 397.00 1772.00 192.00
Cell Chi-Sq 0.10 3.09 0.00 0.88

4 Observed 66 78 205 492 841


Expected 51.00 80.00 192.00 492.00
Cell Chi-Sq 4.41 0.05 0.88 0.00

Total 1976 2256 2456 789 7477

Overall Chi-Square 19.11


P-value 0.0040
Degrees of Freedom 6

In each cell, the original observation, the value expected under the hypothesis of
symmetry, and the cell contribution to the overall chi-square are displayed.
Clearly, this model fits the data better than the model of independence examined in
Chi-Square Test, but there’s still a significant lack of fit (p-value = 0.0040). By
examining the cell chi-squares, you’ll see that the left and right pairs on the
diagonal running from the lower left to the upper right are primarily responsible for
the lack of symmetry.
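
The test itself is easy to state: under symmetry, the expected count for each
off-diagonal pair of cells is the average of the two observed counts, and the
overall statistic has k(k - 1)/2 degrees of freedom for a k × k table. A minimal
Python sketch (illustrative, not Statistix code) that reproduces the report:

import numpy as np
from scipy.stats import chi2

table = np.array([[1520, 234, 117, 36],
                  [266, 1512, 362, 82],
                  [124, 432, 1772, 179],
                  [66, 78, 205, 492]])

expected = (table + table.T) / 2          # expected counts under symmetry
chi_sq = np.sum((table - expected) ** 2 / expected)
df = table.shape[0] * (table.shape[0] - 1) // 2
print(f"Chi-Square {chi_sq:.2f}, df {df}, p {chi2.sf(chi_sq, df):.4f}")
# Chi-Square 19.11, df 6, p 0.0040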

The most common problem in the application of McNemar’s test is that it becomes
unreliable when numerous expected cell values are near zero. The guidelines
usually given for the chi-square test for independence and homogeneity are also
applicable to McNemar’s test. Snedecor and Cochran (1980, p. 77) give the
following general rules:
1) No expected values should be less than one.
2) Two expected values may be close to one if most of the other expected
values exceed five.
3) Classes with expectations less than one should be combined to meet 1) and
2).

Computational Notes
Consult Bishop, Fienberg, and Holland (1975) for more detail on this test and
example.

Two By Two Tables


The Two By Two Tables procedure computes several measures and tests of
association for two by two contingency tables.

Specification
When the two by two tables procedure is selected, an empty two by two table is
displayed, with the cursor positioned in the upper left cell.

Simply enter the number you want in each of the four cells, pressing Tab to move
forward to the next cell. Press the OK button to compute the analysis.

Data Restrictions
The cell values must be nonnegative integers. All row and column totals must be
greater than zero. The individual cell values can’t exceed 99,999.

Example
Suppose 30 people are selected at random and asked two questions. The first
question is whether they favor increasing the budget for space research, and the
second is whether they favor increasing the defense budget. The responses are as
follows:
                          Increase the space
                          research budget?
                            Yes    No
Increase the        Yes     11      3
defense budget?     No       6     10

The goal of the analysis is to examine whether the responses to the two questions
are related. That is, if a person favors increasing space research, would he or she
also favor increased defense spending? The results of the analysis are shown on
the next page.

Two by Two Tables

Total
6 3 9
11 12 23
Total 17 15 32

Fisher Exact Tests


One-tail p 0.2874
Two-tail p 0.4440

Pearson's Chi-Square 0.92 Yule's Q 0.37


P (Pearson's) 0.3369 SE(Q) 0.3539
Yates' Corrected Chi-Sq 0.32 SE(H0: Q=0) 0.3939
P (Yates) 0.5712 Yule's Y 0.19
Log Odds Ratio 0.7802 SE(Y) 0.1977
SE(LOR) 0.8211 SE(H0: Y=0) 0.1970
SE(H0: LOR=0) 0.7879 C Max 0.51
Odds Ratio 2.1818 Phi 0.17
Lower 95% CI for OR 0.4364 Phi Max 0.59
Upper 95% CI for OR 10.909 Contingency Coeff 0.17

Descriptions of most of these measures can be found in Bishop et al. (1975) and
the BMDP-83 manual. Notice that where standard errors are reported for a
measure, two types of standard errors are given—unrestricted and restricted. The
restricted standard error is appropriate for constructing hypothesis tests that the
measure equals zero. The unrestricted standard error is appropriate for constructing
confidence intervals around the measure if the hypothesis that the measure equals
zero is rejected. Brown and Benedetti (1977) should be consulted for more detail.

Large sample theory can be used to test whether a measure differs from zero. If the
samples are “large enough”, the measures should be nearly normally distributed.
Under the null hypothesis, a measure divided by its restricted standard error should
have a Z, or standard normal distribution. For example, to test whether the log
odds ratio is different from zero, we compute Z = 1.8101 / 0.7385 = 2.451. When
we calculate the p-value for Z = 2.451 with the Z procedure (see Chapter 17,
Probability Functions), we find that p = 0.0142, suggesting that the log odds
ratio is different from zero. An approximate 95% confidence interval around the
log odds ratio would be constructed as 1.8101 ± 1.96 × 0.8312 using the
unrestricted standard error.
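
As a check on these numbers, a minimal Python sketch (illustrative, not Statistix
code; the restricted standard error is taken from the report rather than computed):

import math
from scipy.stats import norm

a, b, c, d = 11, 3, 6, 10                  # cell counts from the example survey
lor = math.log((a * d) / (b * c))          # log odds ratio, 1.8101
se = math.sqrt(1/a + 1/b + 1/c + 1/d)      # unrestricted SE, 0.8312

z = lor / 0.7385                           # restricted SE from the report
p = 2 * norm.sf(abs(z))                    # two-sided p-value, about 0.0142

ci = (math.exp(lor - 1.96 * se),
      math.exp(lor + 1.96 * se))           # approximate 95% CI for the odds ratio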

The results of this analysis suggest that there is, in fact, a relationship between the
responses to the two questions.

Fisher’s exact test is displayed for total sample sizes of 5000 or less. The other
statistics are only displayed for total sample sizes of 26 or greater.

Computational Notes
The procedure for determining the probabilities for Fisher’s exact test is
described in Casagrande et al. (1978a). Woolf’s method is used to compute the
confidence intervals for the odds ratio and is described in Lee (1992, p. 286).

Log-Linear Models
The Log-Linear Models procedure fits a hierarchical log-linear model to the data
in a multidimensional contingency table and computes several goodness-of-fit
statistics. You can also save the expected values and residuals.

Specification

The analysis requires a variable which contains the discrete count data. Move the
variable name containing this data to the Dependent Variable box. You can
specify starting values for the estimated cell counts by moving the name of the
variable that contains the starting values to the Starting Values box. This normally
isn’t necessary. Starting values are discussed in detail on page 321.

Log-linear model configurations are specified in a manner quite similar to the


traditional approach of Bishop, Fienberg, and Holland (1975) and Fienberg (1980).
Some examples are given below. The variables X1, X2, …, and XN represent the
categorical variables by which the data in the dependent variable are
cross-classified. The Xi’s should be integers; decimal values are truncated. (Hint:
the Cat function in Transformations is often very useful for generating the
classifying variables.)

Example 1: Two-way table. Model for complete independence.


X1 X2

Example 2: Three-way table. Conditional independence of X1 and X2 given X3.


X1*X3 X2*X3

Example 3: Four-way table. Model includes a three-way interaction.


X1 X2*X3*X4

Variables within the same interaction term are separated by “*”. Interaction terms
are set off from other interaction terms by spaces. The order in which the terms are
specified in the model doesn’t matter, and neither does the order of variables
within the interaction terms.

Log-Linear Models is based on a procedure called iterative proportional fitting
(IPF), which is described in the references mentioned above. Three aspects of the
IPF procedure can be controlled by the user: (1) starting values for the estimated
expected values, (2) maximum number of iterations, and (3) estimated expected
value convergence criterion, or tolerance. You should specify these only if you’re
certain the defaults are unacceptable. The defaults are appropriate for the majority
of applications. The details of changing the defaults follow.

The starting values for the estimated expected values default to 1.0 unless they’re
specified otherwise. This is appropriate for most log-linear analyses (but see
Specifying Starting Values on page 321). To override the default, a variable
containing the starting values must be supplied. These initial values don’t need to
be integers.

The IPF algorithm continues until one of two termination criteria is satisfied.
These criteria are (1) the number of iterations and (2) the maximum absolute
difference in the cell estimates from one iteration to the next, which is called the
tolerance. The default value for the tolerance is 0.01; if none of the absolute
differences in the estimated expected cell counts from one iteration to the next
exceeds the tolerance value, satisfactory convergence is assumed to have occurred.
If the maximum allowable number of iterations is reached, it’s assumed that the
procedure didn’t converge satisfactorily and no results will be given. The default
for the maximum allowable number of iterations is 25.

Decreasing the size of the tolerance often increases the required number of
iterations. The default tolerance value will usually be satisfactory.

Data Restrictions
The maximum number of classifying variables that are allowed in one model is
seven. Within each classifying variable, the maximum number of classes
permitted is 500. No missing values are allowed in the cell data.

The data in the multidimensional contingency table may have been generated by
a Poisson, multinomial, or product multinomial sampling scheme. Consult
Bishop et al. (1975) for more detail. It’s assumed that the model to be fitted is
hierarchical; all lower-order interactions that are subsets within the specified
configurations are always included in the model.

Example
The example is from Table 3-2 of Fienberg (1980). The variable Counts contains
the counts for two species of tree-dwelling lizards. The object of the analysis is to
examine how the height of a perch, the diameter of a perch, and the species of
lizard influence perch selection. The categorical variable Species indicates which
species a count is for. There are two perch-diameter classes recorded in the
variable Diameter. Likewise, there are two perch-height classes in Height. The
data are displayed below, and are available in the file Sample Data\Lizards.sx.

CASE Counts Species Height Diameter


1 48 sagrei ≤ 5.0 ≤ 2.5
2 84 sagrei ≤ 5.0 > 2.5
3 15 sagrei > 5.0 ≤ 2.5
4 18 sagrei > 5.0 > 2.5
5 3 angusticeps ≤ 5.0 ≤ 2.5
6 2 angusticeps ≤ 5.0 > 2.5
7 21 angusticeps > 5.0 ≤ 2.5
8 1 angusticeps > 5.0 > 2.5

A variety of models could be fitted to these data. Refer to Bishop et al. (1975) and
Fienberg (1980) for strategies for finding the “best” models. We’ll use the model
of conditional independence. That is, suppose it’s hypothesized that, given the
species of a lizard, the diameter of the perch it selects is independent of the height
of the perch. The model is specified in the dialog box on page 318. The results
are displayed below.

Log-Linear Model Analysis on Counts

Configuration 1 Species*Height
Configuration 2 Species*Diameter

Goodness-of-fit Summary Statistics

Statistic Chi-Sq DF P
Pearson 6.11 2 0.0471
Likelihood 4.88 2 0.0871
Freeman-Tukey 3.99 2 0.1360

Number of Near Zero Expected Cells 0


Number of Iterations Performed 2
Termination Criterion Difference 0.10

The Pearson, likelihood ratio, and Freeman-Tukey goodness-of-fit statistics are
discussed in Bishop et al. (1975). In this example, it’s difficult to decide whether
the model of conditional independence fits the data; the Pearson chi-square would
lead to the rejection of this model at the 5% significance level, while the other two
goodness-of-fit statistics would not. It’s often desirable to look at the results for all
feasible models.

The degrees of freedom are computed as the total number of cells in the
multidimensional table minus the number of independent parameters that were
estimated. This is appropriate for a “complete” table that has no zero marginals.
(See Bishop et al. for the definition of complete tables.) Incomplete tables or zero
marginals will require special treatment. The number of near zero cell estimates
is displayed to indicate when the degrees of freedom need to be adjusted.

Specifying Starting Values
Occasionally, it’s useful to specify starting values for the estimated cell counts.
One such circumstance is when the cell counts follow a Poisson distribution and
the counts have been taken from different reference populations. Haberman (1978)
and Heisey (1985) give some examples of such analyses; the starting values will
often be non-integer values.

Heisey (1985) gives an example where the goal is to examine factors affecting the
preferences shown by white-tailed deer for different habitat types. Frequent use of
a habitat may result from a preference for that habitat or from that habitat just
being common, so it’s desirable to adjust for the areas of habitats available to the
animals. In Heisey’s study, the categorical variable indicating the habitat type was
called Habitat and the areas of the different habitats in the study area were called
Area. The categorical variables (“factors”) were Deer and Time—the animal
under observation and the time of day, respectively. The number of times a deer
was found in a particular habitat was Use. The best model was found to be:

The main point of interest is that Area was adjusted for by using it as the starting
values for the estimated expected cell frequencies. Consult Heisey (1985) for
more detail.

Another situation where the user will want to specify initial values is for models of
quasi-independence; the starting values will be either zero or one (Fienberg 1980).
Quasi-independence is often of interest for tables that by definition must include
empty cells, or so-called structural zeros. Incomplete tables are handled in a
similar way; missing cells are given a starting value of zero. A note is given in the
results when initial values are specified.

Save Residuals
The results menu for log-linear models includes a procedure to save residuals and
expected values as new variables. You can save standardized, maximum
likelihood, Freeman-Tukey, or raw residuals. You save residuals by entering
variable names in the dialog box below.

Enter a new or existing variable name in the field next to the residual or expected
value you want to save. You can press the down-arrow button next to a box to
display the list of existing variables. You can select as many options as you like.
When you’ve entered all the names you want, press the OK button.

Residuals are useful for diagnosing why a model doesn’t fit the data well, which in
turn may suggest models that will fit well. When the model is correct, the
Freeman-Tukey and standardized residuals are approximately normally distributed
with a mean of zero and a variance slightly less than one.

Computational Notes
Iterative proportional fitting is used to estimate the expected cell frequencies. The
program was generally patterned after Haberman (1972).

Spearman Rank Correlations


The Spearman Rank Correlations procedure computes the Spearman rank
correlation coefficient matrix for a list of variables. Typical significance tests for
simple correlations usually require that the samples follow a bivariate normal
distribution. This is often a difficult assumption to justify, especially for ordinal
data, i.e., ranks. Spearman rank correlations are suitable for examining the degree
of association when the samples violate the assumption of bivariate normality.

Specification
Select the variables for which you want to compute rank correlations. Highlight
the variables you want to select in the Variables list box, then press the right-arrow
button to move them to the Correlation Variables list box. To highlight all
variables, click on the first variable in the list, and while holding the mouse button
down, drag the cursor to the last variable in the list. Check the Compute P-Values
check box to have p-values for the correlation coefficients computed and reported.

Data Restrictions
Up to 100 variables can be specified. If a case in your data has missing values for
any variable, the entire case is deleted (listwise deletion).

Example
We use the data of Hald (1952) used in Draper and Smith (1981) for our example.
This data set is also used for the example for the Correlations procedure in
Chapter 8. The variable Heat is the cumulative heat of hardening for cement after
180 days. The variables Chem1, Chem2, Chem3, and Chem4 are the percentages
of four chemical compounds measured in batches of cement. The data are listed
below, and are stored in the file Sample Data\Hald.sx.

Case Heat Chem1 Chem2 Chem3 Chem4


1 78.5 7 26 6 60
2 74.3 1 29 15 52
3 104.3 11 56 8 20
4 87.6 11 31 8 47
5 95.9 7 52 6 33
6 109.2 11 55 9 22
7 102.7 3 71 17 6
8 72.5 1 31 22 44
9 93.1 2 54 18 22
10 115.9 21 47 4 26
11 83.8 1 40 23 34
12 113.3 11 66 9 12
13 109.4 10 68 8 12

The analysis is specified above. The results are displayed on the next page.

The Spearman correlation coefficient is the usual (Pearson product moment)
correlation coefficient computed from the rank scores of the data rather than the
original data. If ties are found when the data are ranked, the average rank is
assigned to the tied values, as suggested by Hollander and Wolfe (1973). Values
are considered to be tied if they are within 0.00001 of one another. A message
“corrected for ties” is displayed in the first line of the report when ties are found.

Spearman Rank Correlations, Corrected for Ties

Heat Chem1 Chem2 Chem3 Chem4


Heat 1.0000
Chem1 0.7912 1.0000
Chem2 0.7373 0.3301 1.0000
Chem3 -0.4488 -0.7186 0.0527 1.0000
Chem4 -0.7521 -0.3320 -0.9903 -0.0806 1.0000

Maximum Difference Allowed Between Ties 0.00001

Cases Included 13 Missing Cases 0

A similar nonparametric correlation coefficient is Kendall’s tau. In most cases,
inference based on Kendall’s tau will produce results nearly identical to that based
on Spearman’s rho (Conover, 1999).
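
If you want to reproduce a coefficient outside Statistix, scipy computes Spearman
correlations with the same average-rank treatment of ties. A short sketch
(illustrative, not Statistix code):

from scipy.stats import spearmanr

heat = [78.5, 74.3, 104.3, 87.6, 95.9, 109.2, 102.7,
        72.5, 93.1, 115.9, 83.8, 113.3, 109.4]
chem1 = [7, 1, 11, 11, 7, 11, 3, 1, 2, 21, 1, 11, 10]

rho, p = spearmanr(heat, chem1)   # rho is about 0.7912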

Computational Notes
The ranks are first computed for the data. The correlations are then computed with
the same procedures used to produce the Pearson correlations. See Correlations
(Pearson) in Chapter 8 for more detail.

C H A P T E R

12

Randomness/Normality Tests
Statistix offers three procedures for testing data for randomness and normality.

The Runs Test is useful for examining whether samples have been drawn at
random from a single population. It can detect patterns that often result from
autocorrelation.

The Shapiro-Wilk Test calculates the Shapiro-Wilk statistic with p-value used to
test whether data conform to a normal distribution.

The Normal Probability Plot produces a rankit plot, also useful for examining
whether data conform to a normal distribution.

Runs Test
The Runs Test procedure examines whether the number of runs found in a
variable is consistent with the hypothesis that the samples are order-independent.
A run is defined as a sequence of consecutive values that are all above or all below
the sample median (or mean). Autocorrelation often results in more or fewer runs
than expected if the samples are independent.

Specification

Select the names of the variables that you want to test for runs. A separate runs
test is performed for each variable you move to the Runs Test Variables list box.

Choose either the median or the mean for the Cutoff Point. Using the mean can
work better than the median when the data contain only a few discrete values that
would result in a large number of ties with the median.

Example
We’ll apply the runs test to residuals resulting from a linear regression analysis.
The data—the Hald data from Draper and Smith (1981)—are used for the example
data set in Linear Regression. Standardized residuals were computed for the
regression model Heat = Chem1 Chem4 and stored in the variable StdRes. The
runs test is specified by selecting the variable name as shown above. The results
are presented below.

Runs Test for StdRes

Median 0.0553
Values Above the Median 6
Values Below the Median 6
Values Tied with the Median 1
Runs Above the Median 4
Runs Below the Median 4
Total Number of Runs 8
Expected Number of Runs 7.0

P-Value, Two-Tailed Test 0.7835


Probability of getting 8 or fewer runs 0.8247
Probability of getting 8 or more runs 0.3918

A value was counted as a tie with the Median


if it was within 0.00001

Cases Included 13 Missing Cases 0

In residual analysis, you’re more likely to see too few runs rather than too many.
Too few runs result from runs generally being too long, which indicates positive
autocorrelation. When observed in residuals, this could be because you didn’t
include important explanatory variables in the model. The one-tailed p-value of
0.8247 indicates that there is no evidence for too few runs in these residuals.

Too many short runs is less common, and results from negative autocorrelation.
An example where negative autocorrelation can be expected is in a situation where
a process is being constantly monitored and adjusted, and there’s a tendency
toward overcompensation when adjustments are made. The probability of getting
eight or more runs in our example is 0.3918, which again is no cause for alarm.
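
For large samples, the runs probabilities can be approximated with a normal
distribution. A minimal Python sketch (illustrative; Statistix uses the exact
Bradley equations unless the counts exceed 20):

import numpy as np
from scipy.stats import norm

def runs_test(x):
    x = np.asarray(x, dtype=float)
    signs = np.sign(x - np.median(x))
    signs = signs[signs != 0]                  # drop values tied with the median
    n1 = np.sum(signs > 0)
    n2 = np.sum(signs < 0)
    runs = 1 + np.sum(signs[1:] != signs[:-1])
    expected = 1 + 2 * n1 * n2 / (n1 + n2)     # 7.0 for the example data
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)
           / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
    z = (runs - expected) / np.sqrt(var)
    return runs, expected, 2 * norm.sf(abs(z))  # two-tailed p-value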

The Durbin-Watson statistic in Linear Regression is also very important for
diagnosing autocorrelation in regression residuals.

Computational Notes
The equations used to calculate the runs probabilities can be found in Bradley
(1968, p. 254). These exact equations are used unless the number of values above
or below the median exceeds 20, in which case normal approximations are used.

Shapiro-Wilk Normality Test


The Shapiro-Wilk Test examines whether data conform to a normal distribution.
The W statistic and corresponding p-values are calculated.

Specification

Move the variables you want to test to the Test Variables box.

Example
We’ll apply the Shapiro-Wilk test to residuals resulting from a linear regression
analysis. The data—the Hald data from Draper and Smith (1981)—are used for
the example data in Linear Regression. Standardized residuals were computed for
the regression model Heat = Chem1 Chem4 and stored in the variable StdRes. If
the assumptions of linear regression are met, the standardized residuals should be
approximately normally distributed with mean 0 and variance 1. The variable
StdRes is selected for the Shapiro-Wilk test in the dialog box on the preceding
page. The results are presented below.

Shapiro-Wilk Normality Test

Variable N W P
StdRes 13 0.9668 0.8541

The W statistic approaches one for normally distributed data. We reject the null
hypothesis that the data are normally distributed when the p-value is small (e.g.,
smaller than 0.05). The large p-value of 0.8541 for the variable StdRes gives
no evidence of non-normality, so this assumption of linear regression appears
to be satisfied.
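
The same test is available in scipy, which also uses Royston’s algorithm. A short
sketch (illustrative, not Statistix code; the data here are simulated stand-ins for
the StdRes values):

import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)
data = rng.normal(size=13)          # stand-in for the 13 standardized residuals
w, p = shapiro(data)
print(f"W = {w:.4f}, p = {p:.4f}")  # a large p-value: no evidence of non-normality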

Computational Notes
Algorithms used for the W statistic and the p-value are given in Royston (1995).

Normal Probability Plot


The normal probability plot, also called a rankit plot, plots the ordered data points
against the corresponding rankits. When the data plotted are drawn from a normal
population, the points appear to fall on a straight line. The Shapiro-Wilk W
statistic is also reported on the plot.

Specification

Select the name of the variable you want to plot and move it to the Plot Variable
box. Checking the Display Reference Line box adds a reference line drawn
through the first and third quartiles.

Example
We’ll apply the Normal Probability Plot procedure to residuals resulting from a
linear regression analysis. The data—the Hald data from Draper and Smith
(1981)—are used for the example data in Linear Regression. Standardized
residuals were computed for the regression model Heat = Chem1 Chem4 and
stored in the variable StdRes. The variable StdRes is selected as the plot variable
in the dialog box above. The results are presented on the next page.

If the assumptions of linear regression are met, the standardized residuals should
be approximately normally distributed with mean 0 and variance 1. The i-th rankit
is defined as the expected value of the i-th order statistic for the sample, assuming
the sample was from a normal distribution. The order statistics of a sample are the
sample values reordered by their rank. If the sample conforms to a normal
distribution, a plot of the rankits against the order statistics should result in a
straight line, except for random variation.
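
Exact rankits are expensive to compute, but a common approximation (Blom’s
formula) is close for most purposes. A minimal Python sketch (illustrative;
Statistix uses an algorithm similar to Royston’s NSCOR2, as noted below):

import numpy as np
from scipy.stats import norm

def rankits(n):
    i = np.arange(1, n + 1)
    return norm.ppf((i - 0.375) / (n + 0.25))  # approximate expected order statistics

# Plot np.sort(data) against rankits(len(data)); an approximately straight
# line suggests the sample is consistent with a normal distribution.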

Systematic departure of the normal probability plot from a linear trend indicates
non-normality, as does a small value for the Shapiro-Wilk W statistic (see page
327). The example plot above shows no evidence of non-normality. One or a few
points departing from the linear trend near the extremes of the plot are indicative
of outliers. Consult Daniel and Wood (1971), Daniel (1976), and Weisberg (1985)
for more detail.

Computational Notes
Rankits are computed with an algorithm similar to Royston’s (1982) NSCOR2.
The procedure for calculating the required percentage points of the normal
distribution and the Shapiro-Wilk W statistic uses the algorithms provided by
Royston (1995).

C H A P T E R

13
Time Series
A time series is a list of observations collected sequentially, usually over time.
Common time series subjects are stock prices, population levels, product sales,
rainfall, and temperature. It’s assumed that the observations are taken at uniform
time intervals, such as every day, month, or year. Not all time series occur over
time. For example, a list of diameters taken at every meter along a telephone cable
is a legitimate time series.

Observations in a time series are often sequentially dependent. For example,
population levels in the future often depend on levels at present and in the past.
The goal of time series analysis is to model the nature of these dependencies,
which in turn allows you to predict, or forecast, observations that have not yet been
made.

The Time Series Plot procedure is used to create a time series plot for one or more
variables.

The Autocorrelation Plot procedure is used to create an autocorrelation plot for a
specified variable.

The Partial Autocorrelation Plot procedure is used to create a partial
autocorrelation plot for a specified variable.

The Cross Correlation procedure is used to create a cross correlation plot for two
variables.

The Moving Averages procedure is used to compute forecasts for time series data
based on moving averages.

The Exponential Smoothing procedure computes forecasts for time series data
using exponentially weighted averages.

The SARIMA procedure allows you to fit a variety of models to data, including
nonseasonal models as well as multiplicative and nonmultiplicative seasonal
models.

Model Building
The method described by Box and Jenkins (1976) is a popular tool for modeling
time series. The Box-Jenkins approach assumes that the time series can be
represented as an ARIMA process, which stands for AutoRegressive Integrated
Moving Average.

Box and Jenkins advocate an iterative three-step approach to model building:
1) Identification of terms to be included in the model
2) Parameter estimation
3) Model evaluation

Model term identification relies on the use of Time Series Plots, Autocorrelation
Plots, and Partial Autocorrelation Plots. These plots are examined to suggest what
transformations, differencing operators, AR terms, and MA terms should be
included in the model.

Once a tentative model has been identified, parameter estimation is accomplished
with the SARIMA procedure, which uses the unconditional least squares method
(sometimes called the backcasting method) to fit the identified model to the series.

Model evaluation is accomplished by examining the results of the SARIMA
fitting. If the model isn’t adequate, a new tentative model is identified and the
process repeated until a good model is found. A good model can then be used to
forecast future observations. Box and Jenkins (1976) should be consulted for more
details.

The forecasting procedures Exponential Smoothing and Moving Averages are
easier to use and understand than ARIMA models.

Treatment of Missing or Omitted Cases
Time series data sets can’t have embedded missing values or omitted cases. There
can be blocks of missing or omitted cases at the beginning and end of the data set.
Statistix time series procedures use the first continuous block of data that doesn’t
contain missing values or omitted cases for the series.

Time Series Plot


The Time Series Plot procedure is used to create a time series plot for one or more
variables. The values of the variables are plotted in case order.

Specification
Move the names of one or more variables to the Time Series Variables box. If you
select more than one, all the variables will be plotted on a single time series plot
using different colors, line patterns, and point symbols.

Normally, the points along the X axis are labeled as case numbers starting with 1.
One way you can customize the X axis labels is to specify an X Axis Label Var
containing the labels. The label variable can be a string, date, integer, or real
variable. Strings are truncated to 20 characters.

A second way to customize the X axis labels is to use the Origin option to change
the starting case number for the X axis labels. For example, if you have annual
data starting with 1955, enter 1955 in the Origin box to have the axis labeled 1955,
1956, and 1957 instead of 1, 2, and 3.

Numbering the points using digits is useful for identifying seasonal trends. You
should enter a value for Period that reflects the frequency of data collection. For
example, enter 4 for quarterly data or 12 for monthly data. A seasonal trend
stands out clearly when the same digit appears in either peaks or valleys.

The Period also affects how often the X axis is labeled. The example dialog box
shown above specifies a period of 12, which is suitable for labeling monthly data.
The X axis will be labeled every 12 cases.

Choose one of four Plot Types: A Symbol Plot shows a symbol for each case
without lines connecting them. A Connected Plot draws lines between consecutive
points without using symbols. A Connected Symbols Plot plots symbols and
connects the points. A Vertical Line Plot draws a series of vertical bars
representing each case.

Points on the plot can be marked using sequence plots symbols (your selections on
the Settings | Plot Symbols dialog box) or digits. Make your selection using the
Mark Points list.

You can enter values for low, high, and step to control the Y-Axis scale.

Example
The example data are the natural logs of monthly passenger totals (in thousands) in
international air travel for 12 years from January 1949 to December 1960 (Box and
Jenkins 1976, p. 304). You can view the data by opening Sample Data\Airline.sx.
A total of 144 cases are in the variable named Z. The variable Year was created to
annotate the X axis of our example plot—the year was entered for every 12th case.
The dialog box on the preceding page is used to obtain the time series plot below.

Autocorrelation
The Autocorrelation Plot procedure is used to create an autocorrelation plot for
the specified variable. Approximate 95% confidence intervals are also shown.

Specification

Select the name of a time series variable and move it to the Time Series Variable
box. You can specify a lag in the Lag edit control. If you don’t specify a lag, the
maximum lag is used, which is calculated as the square root of the sample size plus
five.

Example
We use the data from Box and Jenkins (1976, p. 304) for our example (see the
sample file Sample Data\Airline.sx). The data are the natural logs of monthly
passenger totals (in thousands) in international air travel for 12 years from January
1949 to December 1960. A total of 144 cases are in a variable named Z. The
variable W is created by first seasonally differencing Z and then nonseasonally
differencing the seasonal difference. Using Transformations from the Data menu,
W is created in two steps by:
W = Z - LAG (Z, 12)
W = W - LAG (W)

That is, W = ∇∇₁₂Z, where ∇ is the differencing operator.

The dialog box above shows the entries for the differenced airline data. A specific
lag is not specified, so the maximum lag is used. The results are presented in the
autocorrelation plot on the next page.

The autocorrelations are displayed graphically as vertical bars. Approximate 95%
confidence bounds are indicated with horizontal lines. The values of the auto-
correlations are listed along the top of the plot. The confidence intervals for the
lag p are based on the assumption that autocorrelations for lags p and greater are
zero.
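
The sample autocorrelations themselves are straightforward to compute. A
minimal Python sketch (illustrative, not Statistix code), including Bartlett’s
approximate standard error used for the confidence bounds:

import numpy as np

def acf(z, max_lag):
    z = np.asarray(z, dtype=float)
    dev = z - z.mean()
    c0 = np.sum(dev * dev)
    r = np.array([np.sum(dev[:-k] * dev[k:]) / c0 for k in range(1, max_lag + 1)])
    # Bartlett's standard error for lag k assumes the autocorrelations
    # beyond lag k - 1 are zero
    n = len(z)
    se = np.sqrt((1 + 2 * np.cumsum(np.concatenate(([0.0], r[:-1] ** 2)))) / n)
    return r, se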

Computational Notes
Computations follow those outlined in Box and Jenkins (1976).

Partial Autocorrelation
The Partial Autocorrelation Plot procedure is used to create a partial
autocorrelation plot for the specified variable. Approximate 95% confidence
intervals are also displayed.

Specification

Select the name of a Time Series Variable. You may specify a Lag. If you don’t,
the maximum lag is used, calculated as the square root of the sample size plus five.

Example
We use the data from Box and Jenkins (1976, p. 304) for our example. The data
are the natural logs of monthly passenger totals (in thousands) in international air
travel for 12 years from January 1949 to December 1960. The dialog box below
shows the entries for the differenced airline data in the variable W (page 334). A
specific lag isn’t specified, so the maximum lag is used. The resulting partial
autocorrelation plot is shown below.

The partial autocorrelations are displayed graphically as vertical bars.
Approximate 95% confidence bounds are indicated with horizontal lines. The
values of the partial autocorrelations are listed along the top of the plot.

The confidence intervals for lag p are based on the assumption that the series
results from an autoregressive process of order p - 1. Computations follow those
outlined in Box and Jenkins (1976).

Cross Correlation
The Cross Correlation procedure is used to create a cross correlation plot for two
variables.

Specification

Select the names of the two Time Series Variables. Unless a Lag is explicitly
specified, the maximum absolute value of the lag is used, computed as the square
root of the sample size plus five.

Example
To demonstrate cross correlation, we’ll fabricate some data. The variable Event1
is created simply as a list of uniform random numbers. We then create the variable
Event2 as a moving average process of Event1. These two variables are generated
in Transformations as:
Event1 = Random(Uniform,0,1)
Event2 = Event1 + Lag(Event1)/2 + Lag(Event1,2)/3 +
Lag(Event1,6)/4

The dialog box displayed above is used to find the cross correlations for Event1
and Event2 for 10 lags. The resulting plot is presented on the next page.
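
The cross correlation at lag k is just the correlation between one series and the
other shifted by k time units. A minimal Python sketch (illustrative, not Statistix
code):

import numpy as np

def ccf(x, y, k):
    # Sample cross correlation of x and y at lag k (y lagging x for k > 0)
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    n = len(x)
    denom = np.sqrt(np.sum(x * x) * np.sum(y * y))
    if k >= 0:
        return np.sum(x[:n - k] * y[k:]) / denom
    return np.sum(x[-k:] * y[:n + k]) / denom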

Computational Notes
Computations follow those outlined in Box and Jenkins (1976).

Moving Averages
The Moving Averages procedure is used to compute forecasts for time series data.
Both single and double moving averages techniques are available. Use single
moving averages when there is no trend in the time series data. Use double
moving averages when there is a trend in the data.

Specification

Select the name of a Time Series Variable. Then select single or double moving
averages using the Model radio buttons.

Enter a value for the number of periods in the moving average in the Periods in an
Average edit control. In general, you can select a value for the period that
minimizes the mean absolute deviation (MAD) or the mean squared forecast
error (MSE). The number of periods in the moving average is sometimes
selected to remove seasonal effect; 4 is used for quarterly data, 12 for monthly
data, and so on.

When using double moving averages, you can also specify the number of future
periods for which you want to compute forecasts in the Periods to Forecast box.

Data Restrictions
There must be enough cases in the time series to compute the moving averages.
The minimum number depends on the value you enter for the length of the moving
averages. If the number of periods you specify for the moving averages is n, there
must be at least n + 1 cases for single moving averages and 2 × n cases for double
moving averages.

Example
We illustrate single moving averages using earnings per share for Exxon for the
years 1962 to 1976. The data are stored in Sample Data\Exxon.sx.

Year Earnings
1962 1.94
1963 2.37
1964 2.44
1965 2.41
1966 2.53
1967 2.77
1968 2.97
1969 2.89
1970 2.96
1971 3.39
1972 3.42
1973 5.45
1974 7.02
1975 5.60
1976 5.90

The dialog box on the preceding page shows the moving averages options selected
for the Exxon data. The results are displayed on the next page.

The results list a number of summary statistics that are useful for checking the
adequacy of the model. The forecast for the year 1977 is displayed with a 95%
confidence interval.

Single Moving Averages for Earnings

Moving Average Length 4

Sum of Squared Errors (SSE) 17.2542


Mean Squared Error (MSE) 1.56857
Standard Error (SE) 1.25242
Mean Absolute Deviation (MAD) 0.82386
Mean Abs Percentage Error (MAPE) 16.84
Mean Percentage Error (MPE) 16.84
Number of Cases 15

95% C.I. 95% C.I.


Lead Lower Bound Forecast Upper Bound
1 3.53775 5.99250 8.44725
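
The arithmetic behind this report is easy to reproduce. A minimal Python sketch
(illustrative, not Statistix code):

import numpy as np

earnings = np.array([1.94, 2.37, 2.44, 2.41, 2.53, 2.77, 2.97, 2.89,
                     2.96, 3.39, 3.42, 5.45, 7.02, 5.60, 5.90])
n = 4                                                     # moving average length

ma = np.convolve(earnings, np.ones(n) / n, mode="valid")  # averages of 4 values
errors = earnings[n:] - ma[:-1]                           # one-step forecast errors
sse = np.sum(errors ** 2)                                 # 17.2542
mse = np.mean(errors ** 2)                                # 1.56857
mad = np.mean(np.abs(errors))                             # 0.82386
forecast = ma[-1]                                         # 5.99250, the 1977 forecast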

Moving Averages Results Menu
Once the analysis is computed and displayed, the Results menu icon, which
appears as a shovel (think “dig-deeper”), on the toolbar is enabled.

Selecting Coefficient Table from the menu will redisplay the results presented
above. The Forecast Plot produces a time series plot of the actual and fitted data.
The Residuals Plot produces a time-series plot of the residuals. The remaining
menu selections are discussed below.

Forecast Table
The Forecast Table lists the actual value, moving average, forecast, and forecast
error for each time period in the data.

Save Residuals
Use the Save Residuals procedure to save the fitted values (forecasts) and/or
residuals (forecast errors) in new or existing variables for later analysis. Enter a
variable name in the Fitted Values Variable box and/or the Residuals Variable
box.

Computational Notes
Computations follow those described in Hanke and Reitsch (1989) and Mercier
(1987).

Exponential Smoothing
The Exponential Smoothing procedure computes forecasts for time series data
using exponentially weighted averages. Five different models are available to fit
no trend data, trend data, and trend with seasonal data.

Specification

First select the name of a Time Series Variable. Press the down-arrow button to
display the list of variables in your data set.

Next select the model you want to use from the Model radio buttons. Single
exponential smoothing is used when the time series data doesn’t exhibit any trend.
The smoothing constant Alpha determines how much past observations influence
the forecast. A small smoothing constant results in a slow response to new values;
a large constant results in a fast response to new values. Values for the smoothing
constant are normally selected in the range 0.05 to 0.60. For the best model, select
a smoothing constant that minimizes the mean squares of the forecast errors
(MSE).

Brown’s 1-P linear model and Holt’s 2-P linear model are used when the data
exhibit trend. Brown’s model uses one smoothing constant (Alpha) to smooth
both the local average and trend estimates. Holt’s model uses two smoothing
constants, one for the smoothed local average (Alpha) and one for the trend
estimate (Beta).

Winters’ models are used when the data exhibit trend and seasonality. The
additive model is used when the seasonal influences are additive, that is, the
seasonal influences have the same magnitude from year to year. The multiplicative
model is appropriate when the seasonal swings are wider for years with higher
levels. Both models use three smoothing constants, for the local average (Alpha),
trend (Beta), and season (Gamma). You must also specify the Season Length.

All five models require that you enter values for the applicable Smoothing
Constants. Some models give you the option of entering Initial Values and the
number of future Periods to Forecast.

Example
Brown’s 1-P linear exponential smoothing is illustrated using earnings per share
data for Exxon for the years 1962 to 1976. The data are listed on page 338, and are
stored in the file Sample Data\Exxon.sx.

The dialog box above shows the exponential smoothing options selected for the
Exxon data. The results are presented on the next page.

The coefficient table below lists a number of summary statistics that are useful for
checking the adequacy of the model (Hanke and Reitsch 1989). You can also use
the residuals (see Save Residuals on page 343) to examine the model fit.

The forecasts for the years 1977 to 1979 are displayed with their 95% confidence
intervals.

Brown's 1-P Linear Exponential Smoothing for Earnings

Smoothing Constant 0.1

Sum of Squared Errors (SSE) 9.47382


Mean Squared Error (MSE) 0.63159
Standard Error (SE) 0.79472
Mean Absolute Deviation (MAD) 0.59165
Mean Abs Percentage Error (MAPE) 16.57
Mean Percentage Error (MPE) -1.81
Number of Cases 15

Forecast (T) = 5.84858 + 0.31442 * T

95% C.I. 95% C.I.


Lead Lower Bound Forecast Upper Bound
1 4.60534 6.16300 7.72066
2 4.91179 6.47742 8.04305
3 5.21789 6.79184 8.36579
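
The recursions behind Brown’s method are short. A minimal Python sketch
(illustrative, not Statistix code), seeded with the initial smoothed values shown at
time 0 in the forecast table that follows:

def brown_linear(z, alpha, s1, s2):
    # s1, s2 are the initial first and second exponentially smoothed values
    for obs in z:
        s1 = alpha * obs + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
    level = 2 * s1 - s2
    slope = alpha / (1 - alpha) * (s1 - s2)
    return lambda lead: level + slope * lead

earnings = [1.94, 2.37, 2.44, 2.41, 2.53, 2.77, 2.97, 2.89,
            2.96, 3.39, 3.42, 5.45, 7.02, 5.60, 5.90]
forecast = brown_linear(earnings, alpha=0.1, s1=-1.62107, s2=-4.38729)
print(forecast(1))   # about 6.163, matching the lead-1 forecast above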

Exponential Smoothing Results Menu
Once the analysis is computed and displayed, the Results menu icon, which
appears as a shovel (think “dig-deeper”), on the toolbar is enabled. Click on the
icon to display the results menu, as shown below.

Selecting Coefficient Table from the menu will redisplay the results presented
above. The Residuals Plot produces a time-series plot of the residuals. The
remaining menu selections are discussed below.

Forecast Table
The forecast table lists the actual value, first and second exponential averages,
slope, forecast, and forecast error for each time period in the data. The forecast
table for the model specified on page 340 is displayed on the next page.

Brown's 1-P Linear Exponential Forecast Table for Earnings

Actual 1st Exp 2nd Exp Forecast


Time Value Average Average Trend Forecast Error
0 -1.62107 -4.38729 0.307
1 1.94000 -1.26496 -4.07505 0.312 1.45250 0.48750
2 2.37000 -0.90147 -3.75769 0.317 1.85736 0.51264
3 2.44000 -0.56732 -3.43866 0.319 2.27212 0.16788
4 2.41000 -0.26959 -3.12175 0.317 2.62305 -0.21305
5 2.53000 0.01037 -2.80854 0.313 2.89948 -0.36948
6 2.77000 0.28633 -2.49905 0.309 3.14249 -0.37249
7 2.97000 0.55470 -2.19368 0.305 3.38120 -0.41120
8 2.89000 0.78823 -1.89549 0.298 3.60845 -0.71845
9 2.96000 1.00541 -1.60540 0.290 3.77014 -0.81014
10 3.39000 1.24387 -1.32047 0.285 3.90630 -0.51630
11 3.42000 1.46148 -1.04228 0.278 4.09313 -0.67313
12 5.45000 1.86033 -0.75201 0.290 4.24343 1.20657
13 7.02000 2.37630 -0.43918 0.313 4.76294 2.25706
14 5.60000 2.69867 -0.12540 0.314 5.50461 0.09539
15 5.90000 3.01880 0.18902 0.314 5.83652 0.06348
16 6.16300
17 6.47742
18 6.79184

Forecast Plot
Selecting Forecast Plot from the menu produces a time series plot that shows both
the actual time series data and the fitted data. The forecasts are also plotted. The
dialog box provides options to label the X axis in the same manner as the Time
Series Plot (page 332). Values of 4 for Period and 1962 for Origin were used to
produce the plot for the Exxon data shown below.

Save Residuals
Use the Save Residuals procedure to save the fitted values (forecasts) and/or
residuals (forecast errors) in new or existing variables for later analysis. Enter a
variable name in the Fitted Values Variable box and/or the Residuals Variable
box.

Computational Notes
Computations for the single and linear trend models follow those described in
Abraham and Ledolter (1983). Computations for Winters’ models follow
Thomopoulos (1980).

SARIMA
The SARIMA procedure allows you to fit a variety of models to data, including
nonseasonal models as well as multiplicative and nonmultiplicative seasonal models.
SARIMA stands for Seasonal AutoRegressive Integrated Moving Average. It’s a
procedure for modeling time series popularized by Box and Jenkins (1976). MA
and AR terms need not be sequential, so nonsignificant terms don’t need to be
included in the model.

Output includes parameter estimates, approximate significance levels, and several
statistics useful for diagnosing model fit. You can also obtain forecasts with
confidence intervals. You can save fitted values and residuals to evaluate model
adequacy. Estimation is based on unconditional least squares, also known as the
backcasting method.

Specification
The dialog box appears on the next page. First select the name of your Time
Series Variable. You can press the down-arrow button next to the variable box to
display the list of your variables. Next you specify the ARIMA model by filling in
values in the relevant controls. You can also specify the Marquardt and Nelder-
Mead criteria, the maximum number of iterations, and initial values. Press the OK
button to begin the analysis.

For example, the dialog box below fits the ARIMA model (0, 1, 1)×(0, 1, 1)₁₂ to
the airline data described on page 346.

A description of each of the dialog box fields is listed below:

Time Variable: Variable name for the time series variable


AR Lags: Lags of nonseasonal autoregressive terms in model
Nonseasonal d: Order of nonseasonal differencing
MA Lags: Lags of nonseasonal moving average terms in model
SAR Lags: Seasonal lags of seasonal autoregressive terms in model
Seasonal D: Order of seasonal differencing
SMA Lags: Seasonal lags of seasonal moving average terms in model
Season Length: Period of seasonality
Fit Constant: Model should include a constant term
Marquardt Criterion: Criterion to stop nonlinear least squares
Nelder-Mead Search: Perform simplex search after nonlinear least squares
Nelder-Mead Criterion: Criterion to stop Nelder-Mead simplex search
Maximum iterations: Number of iterations allowed
Initial Values: List of starting values for any of the model parameters

Most of these fields are self-explanatory. Estimation is initially performed with
Marquardt’s nonlinear least squares procedure. After each iteration, the changes in
the parameter estimates are checked. If all parameters have changed less in
absolute value than the Marquardt Criterion, the procedure terminates and is
assumed to have converged successfully.

If you checked the Nelder-Mead Simplex Search box, a Nelder-Mead simplex
search is performed after the Marquardt procedure in an attempt to further reduce
the unconditional sums of squares. Unlike the Marquardt criterion, the Nelder-
Mead criterion is based on the reduction of the unconditional sums of squares.
After each iteration, the reduction in the unconditional sums of squares is checked.
If the reduction is less than the number specified for the Nelder-Mead Criterion,
the procedure terminates and convergence is assumed.

Both Marquardt and Nelder-Mead stop when the number of iterations equals
Maximum Iterations if convergence hasn’t occurred.

Note: When specifying AR, MA, SAR, or SMA terms, all lags must be specified.
For example, to specify an AR(3) model, you would enter “1 2 3”, not “3”. The
latter specification is appropriate only if you want the AR coefficients for lags 1
and 2 constrained to zero.

Example
We use the data from Box and Jenkins (1976, p. 304) for our example. You can
view the data by opening Sample Data\Airline.sx. The data in the variable Z are
the natural logs of monthly passenger totals (in thousands) in international air
travel for 12 years from January 1949 to December 1960. The variable W is
created by first seasonally differencing Z and then nonseasonally differencing the
seasonal difference. Using Transformations from the data menu, W is created in
two steps by:
W = Z - Lag(Z, 12)
W = W - Lag(W)

That is, W = ∇∇₁₂Z, where ∇ is the differencing operator. The dialog box on the
preceding page specifies the ARIMA model (0, 1, 1)×(0, 1, 1)₁₂.

The results are summarized in the coefficient table on the next page.

Parameter significance can be judged with the t-like statistic Coef/SE. The p-
value for this statistic assumes a standard normal distribution. Overall model fit
can be judged with the Ljung-Box statistic (Ljung and Box, 1978), which is
calculated for lags at multiples of 12. Small p-values indicate that the model fit is
poor. In the example, the p-values are large enough to suggest lack of fit isn’t a
problem.

Unconditional Least Squares SARIMA Model for Z

Nonseasonal Differencing of Order 1


Seasonal Differencing of Order 1, Period 12
NOTE: No constant term in model

Term Coefficient Std Error Coef/SE P


MA 1 0.39308 0.08033 4.89 0.0000
SMA 1 0.61263 0.06941 8.83 0.0000

MS (Backcasts excluded) 1.334E-03


DF 129
SS (Backcasts excluded) 0.17213
SS Due to Backcasts 3.874E-03
N Before Differencing 144
N After Differencing 131
Marquardt Criterion of 0.01 was met.
Simplex Criterion of 0.01 was met.

Ljung-Box Portmanteau Lack-of-fit Diagnostics


LAG DF Chi-Sq P
12 10 9.33 0.5015
24 22 25.43 0.2769
36 34 35.38 0.4028
48 46 44.03 0.5553
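
The Ljung-Box statistic is computed from the residual autocorrelations. A
minimal Python sketch (illustrative, not Statistix code), where the degrees of
freedom are the lag minus the number of estimated parameters:

import numpy as np
from scipy.stats import chi2

def ljung_box(residuals, lag, n_params):
    e = np.asarray(residuals, dtype=float) - np.mean(residuals)
    n = len(e)
    c0 = np.sum(e * e)
    r = np.array([np.sum(e[:-k] * e[k:]) / c0 for k in range(1, lag + 1)])
    q = n * (n + 2) * np.sum(r ** 2 / (n - np.arange(1, lag + 1)))
    df = lag - n_params            # e.g. 12 - 2 = 10 for the first row above
    return q, chi2.sf(q, df)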

As noted, model terms don’t need to be sequential. As an example, we’ll fit a
nonmultiplicative seasonal model to the airline data set.

The model we’ve already used is ∇∇₁₂zₜ = (1 − θB)(1 − ΘB¹²)aₜ (∇ is the
differencing operator; other terms are explained in Box and Jenkins 1976). When
expanded, this model is ∇∇₁₂zₜ = aₜ − θaₜ₋₁ − Θaₜ₋₁₂ + θΘaₜ₋₁₃. The more general
nonmultiplicative model is ∇∇₁₂zₜ = aₜ − θ₁aₜ₋₁ − θ₁₂aₜ₋₁₂ − θ₁₃aₜ₋₁₃. This model is
specified in the dialog box on the preceding page.

While this model has a smaller MS (0.00126) than the multiplicative model, it also
requires an additional parameter. Box and Jenkins (1976, p. 324) briefly consider
how to examine whether such models are improvements over their multiplicative
counterparts.

SARIMA Results Menu
Once the analysis is computed and displayed, the Results menu icon, which
appears as a shovel (think “dig-deeper”), on the toolbar is enabled. Click on the
icon to display the results menu, as shown below.

Select Coefficient Table from the Results menu to redisplay the results presented
on the preceding page. The Residuals Plot produces a time-series plot of the
residuals. The remaining options are discussed below.

Forecast Table
The Forecast Table procedure lets you forecast future observations. It also gives
confidence intervals for the forecasts.

First enter a value for the Periods To Forecast, which is the number of future time
intervals you want to forecast. You can also specify the C. I. Percentage
Coverage for computing confidence levels.

Forecasts for our airline ticket sales example are presented below.

Forecast Table for Z

95% C.I. 95% C.I.


Lead Lower Bound Forecast Upper Bound
1 6.03779 6.10938 6.18098
2 5.97163 6.05537 6.13912
3 6.08325 6.17760 6.27195
4 6.09434 6.19821 6.30209
5 6.11796 6.23056 6.34315
6 6.24756 6.36825 6.48894
7 6.37619 6.50446 6.63274
8 6.36544 6.50088 6.63631
9 6.18287 6.32510 6.46734
10 6.05870 6.20742 6.35614
11 5.90898 6.06392 6.21886
12 6.00841 6.16933 6.33025

Forecast Plot

The Forecast Plot produces a time series plot that shows both the actual time series
data and the fitted data. The forecasts are also plotted.

First enter a value for the Periods To Forecast, which is the number of future time
intervals for which you want forecasts plotted. Normally, the points along the X axis are
labeled as case numbers starting with 1. One way you can customize the X axis
labels is to specify an X Axis Label Var containing the labels. The label variable
can be a string, date, integer, or real variable. Strings are truncated to 20
characters.

A second way to customize the X axis labels is to use the Origin option to change
the starting case number for the X axis labels. For example, if you have annual
data starting with 1955, enter 1955 in the Origin box to have the axis labeled 1955,
1956, and 1957 instead of 1, 2, and 3.

The Period also affects how often the X axis is labeled. For example, if you
specify a period of 12, which is suitable for labeling monthly data, the X axis
will be labeled every 12 cases.

The plot for the airline ticket sales data is shown below.

Save Residuals
Use the Save Residuals procedure to save the fitted values, residuals (forecast
errors), or forecasts for later analysis. Because of differencing and lagging, fitted
values and residuals may have fewer cases than the original series.

The Fitted Values Variable stores fitted values corresponding to cases used in the
analysis. The Forecast (Future) Variable is used to save forecasts corresponding
to time periods after the cases used for the analysis. Enter a value for the Periods
to Forecast when using the Forecast (Future) Variable.

Variance-Covariance Matrix
The Variance-Covariance Matrix selection displays the variance-covariance matrix
of the estimated model coefficients.
Variance-Covariance Matrix for Coefficients

MA 1 SMA 1
MA 1 6.453E-03
SMA 1 -3.165E-04 4.818E-03

Computational Notes
Computations generally follow those outlined in Box and Jenkins (1976). Initial
values for the nonseasonal AR parameters are computed as described on page 499 of
Box and Jenkins (1976). These, along with the mean of the series, are used to
construct an initial estimate of the constant (Box and Jenkins 1976, p. 500) if the
model contains one. All other parameters are initially set to 0.1. The Marquardt
procedure follows Box and Jenkins (1976) with modifications suggested by Nash
(1979). Numerical derivatives use “Nash’s compromise” (Nash, 1979, eq. 18.5). The
Nelder-Mead procedure is patterned after Nash’s outline.

C H A P T E R

14

Quality Control
Statistix offers a number of quality control or statistical process control (SPC)
procedures. SPC methods are used to improve the quality of a product or service
by examining the process employed to create the product or service.

The Pareto Chart procedure produces a Pareto chart, which is used in SPC to
identify the most common problems or defects in a product or service. It is a
histogram with the bars sorted by decreasing frequency.

A control chart plots a measurement sampled from a process by sample number
over time. A center line is drawn to represent the average value of the quality
characteristic when the process is stable, that is, “in control”. Two lines are drawn
on the control chart to represent the upper and lower control limits (UCL and
LCL). A point that falls outside the control limits is evidence that the process is
“out of control”. Statistix uses “3-sigma” control limits that are computed as the
center line value plus or minus three times the standard deviation of the process
statistic being plotted.

A quality characteristic that can’t be measured on a quantitative scale but can be
classified as conforming or nonconforming is called an attribute. Consider a
cardboard juice container as an example: The seams are either conforming (will
not leak) or nonconforming (will leak). Statistix computes four attributes control
charts—the p chart, np chart, c chart, and u chart.

The P Chart procedure plots the fraction nonconforming. The Np Chart
procedure plots the number nonconforming.

The C Chart procedure plots the number of nonconformities per inspection unit
(e.g., flaws on the finish of a television set). The U Chart procedure plots the
average number of nonconformities per unit.

A quantitative quality characteristic, such as the diameter of piston rings, is called
a variable. Statistix computes six control charts for variables—the X bar chart, R
chart, S chart, I chart, MR chart, and EWMA chart.

The X Bar Chart procedure plots the average of samples; it’s used to control the
process average of a variable.

The R Chart procedure plots the sample range. The S Chart procedure plots the
sample standard deviation. These plots are used to control the variability of a
process.

The I Chart procedure plots individuals—variables with a sample size of one.


The MR Chart procedure plots the moving range of individuals.

The EWMA Chart procedure plots an exponentially weighted moving average. It
can be used to control the process mean using individuals or sample averages.

See Montgomery (1991) for computational details for all of the procedures
discussed in this chapter.

Pareto Chart
The Pareto Chart is used to identify the most frequent causes of defects. It
displays a histogram with the bars ordered by frequency.

Specification

First move the name of the variable that contains the defect classifications to the
Defect Cause Variable box. This variable can be of any type—real, integer, date,
or string. Strings are truncated to 20 characters.

You can enter your data one defect at a time, so that each case in your data set
represents one defect. These types of data are often tabulated as they are collected,
in which case it’s more convenient to enter the data in two columns, one for the
defect cause and one for the count of defects. In this case you must move the name
of the variable containing the counts of defects for each cause to the Defect Count
Variable box.

You can limit the number of bars by entering a value in the Maximum Defect
Categories edit control. If there are more categories than this value, the least
frequently recorded categories are excluded from the chart.

Check the Display Cumulative Percent check box to have a cumulative
distribution curve drawn on the Pareto chart.

Example We use data from Montgomery (1991, p. 119) for our example. The various
reasons that aircraft tanks were classified defective were collected over several
months. The data were entered into Statistix using two variables. The variable
Cause is used to identify the defect cause, and the variable Count records the count
for each cause. The data are available in the file Sample Data\Tanks.sx. The
analysis is specified on the preceding page. The results are displayed below.

P Chart
The P Chart is the control chart for the fraction of a sample that is
nonconforming. P charts are used for attributes—quality characteristics that can be
classified as conforming or nonconforming.

Specification The p chart is computed both from the number of defects per sample and the
sample size. First move the name of the variable that contains the counts of
defects for each sample to the Defect Count Variable box. If the sample size is
constant, select the Constant sample size radio button and enter the sample size in
the Sample Size Constant edit control. If the sample size isn’t always the same,

select the Variable sample size radio button and move the name of the variable
that contains the sample size for each case to the Sample Size Variable box.

The center line and the control limits are computed from p, the fraction
nonconforming. You can choose to enter a standard or historical value for p for
this purpose, or have p computed from the data. Make your choice by selecting
one of the two p radio buttons. When you select Compute From Data, you may
specify the first and last case number used to compute p. If these case numbers are
left blank, all cases are used. When you select Standard Value, you must enter a
value in the Standard p edit control.

Example We use data from Montgomery (1991, p. 151) for our example. Cardboard cans
being manufactured for orange juice concentrate were sampled from the machine
at half-hour intervals. Each sample contained 50 cans. The number of cans with
defective seams were recorded for each sample. The data are displayed below, and
are available in the file Sample Data\Juice.sx.

Case Defects Case Defects


1 12 16 8
2 15 17 10
3 8 18 5
4 10 19 13
5 4 20 11
6 7 21 20
7 16 22 18
8 9 23 24
9 14 24 15
10 10 25 9
11 5 26 12
12 6 27 7
13 17 28 13
14 12 29 9
15 22 30 6

The p chart model is specified in the dialog box on the preceding page. The results
are shown in the figure below.

The fraction of defective cans is plotted for each sample. The center line is plotted
at the average value for p, which is 0.2313. The 3-sigma control limits are labeled
on the right side: UCL = 0.4102, LCL = 0.0524. Two values exceed the UCL,
indicating that the process is out of control. These case numbers (15 and 23) are
noted at the bottom of the chart.
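
The 3-sigma limits quoted above follow from the binomial standard error of a
sample fraction. Here is a minimal Python check using the defect counts from the
table (an illustration of the standard formula, not the code Statistix uses):

    import numpy as np

    defects = np.array([12, 15, 8, 10, 4, 7, 16, 9, 14, 10, 5, 6, 17, 12, 22,
                        8, 10, 5, 13, 11, 20, 18, 24, 15, 9, 12, 7, 13, 9, 6])
    n = 50                                 # constant sample size
    p = defects / n                        # fraction nonconforming per sample
    p_bar = p.mean()                       # center line: 0.2313
    sigma = np.sqrt(p_bar * (1 - p_bar) / n)
    ucl = p_bar + 3 * sigma                # 0.4102
    lcl = max(p_bar - 3 * sigma, 0.0)      # 0.0524
    print(np.where(p > ucl)[0] + 1)        # cases 15 and 23 exceed the UCL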

Np Chart
The Np Chart is the control chart for the number of nonconforming units. Np
charts are used for attributes—quality characteristics that can be classified as
conforming or nonconforming. The np chart gives the same results as the p chart
discussed earlier (page 354); the only difference is the units of the vertical axis.

Specification The np chart is computed from the number of defects per sample and the sample
size. First select the name of the variable that contains the counts of defects for
each sample and move it to the Defect Count Variable box. The sample size must
be a constant. Enter the number in the Sample Size box.

The center line and the control limits are computed from p, the fraction
nonconforming. You can choose to enter a standard or historical value for p for
this purpose, or have p computed from the data. Make your choice by selecting
one of the two p radio buttons. When you select Compute From Data, you may

specify the first and last case number used to compute p. If these case numbers are
left blank, all cases are used. When you select Standard Value, you must enter a
value in the Standard p edit control.

Example We use data from Montgomery (1991, p. 151) for our example. Cardboard cans
being manufactured for orange juice concentrate were sampled from the machine
at half-hour intervals. Each sample contained 50 cans. The number of cans with
defective seams were recorded for each sample. The data are displayed below, and
are available in the file Sample Data\Juice.sx.

Case Defects Case Defects


1 12 16 8
2 15 17 10
3 8 18 5
4 10 19 13
5 4 20 11
6 7 21 20
7 16 22 18
8 9 23 24
9 14 24 15
10 10 25 9
11 5 26 12
12 6 27 7
13 17 28 13
14 12 29 9
15 22 30 6

The Np chart model is specified in the dialog box above. The results are shown in
the figure on the next page.

The number of defective cans is plotted for each sample. The center line is plotted
at the historical value for np, 50 × 0.231 = 11.6. The 3-sigma upper and lower
control limits are labeled on the right side at 20.5 and 2.6. Two values exceed the
UCL at cases 15 and 23, which are noted at the bottom of the chart.
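
The np limits are the p chart limits rescaled by the sample size: the center line is
n·p̄ and the limits are n·p̄ ± 3√(n·p̄(1 − p̄)). A quick check (illustrative Python,
not Statistix code):

    n, p_bar = 50, 0.2313                  # sample size, fraction nonconforming
    center = n * p_bar                     # 11.6
    sigma = (n * p_bar * (1 - p_bar)) ** 0.5
    ucl = center + 3 * sigma               # 20.5
    lcl = max(center - 3 * sigma, 0.0)     # 2.6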

C Chart
The C Chart is the control chart for nonconformities (defects). A product may
contain one or more defects and not be considered defective. If, for example, a
television cabinet has a flaw in the finish, we wouldn’t necessarily want to reject
the television. In these situations we’re more interested in the number of defects
per inspection unit.

Specification

The c chart is computed from the number of defects per inspection unit. Enter the
name of the variable that contains the counts of defects for each inspection unit.

The center line and the control limits are computed from c, the number of
nonconformities per inspection unit. You can enter a standard or historical value
for c, or you can use the average value of c computed from the data. When c is
estimated from the data, you can specify the first and last case number used to
compute the average value. If these case numbers are left blank, as in the dialog
box above, all cases are used. When you select Standard Value, you must enter a
value in the Standard c edit control.

Example We use data from Montgomery (1991, p. 173) for our example. Printed circuit
boards were inspected for defects. The inspection unit was defined as 100 circuit
boards. The number of defects per 100 circuit boards for 26 samples are listed
below, and are available in the file Sample Data\Circuit.sx.

Case Defects Case Defects


1 21 14 19
2 24 15 10
3 16 16 17
4 12 17 13
5 15 18 22
6 5 19 18
7 28 20 39
8 20 21 30
9 31 22 24
10 25 23 16
11 20 24 19
12 24 25 17
13 16 26 15

The analysis is specified in the dialog box on the preceding page. The resulting c
chart for the variable Defects is displayed below.

The number of nonconformities is plotted for each inspection unit (100 printed
circuit boards). The center line is plotted at the average value for c, 19.8. The 3-
sigma upper and lower control limits are labeled on the right side at 33.2 and 6.5.
The value at case 6 is below the LCL, and the value at case 20 is above the UCL.
The process isn’t in control.
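
Because counts of nonconformities are modeled as Poisson, the variance equals the
mean and the limits are simply c̄ ± 3√c̄. An illustrative check in Python (not
Statistix code):

    import numpy as np

    defects = np.array([21, 24, 16, 12, 15, 5, 28, 20, 31, 25, 20, 24, 16,
                        19, 10, 17, 13, 22, 18, 39, 30, 24, 16, 19, 17, 15])
    c_bar = defects.mean()                      # center line: 19.8
    ucl = c_bar + 3 * np.sqrt(c_bar)            # 33.2
    lcl = max(c_bar - 3 * np.sqrt(c_bar), 0.0)  # 6.5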

U Chart
The U Chart is the attribute control chart for nonconformities per unit. It is used
in controlling the nonconformities per unit when the sample size isn’t one
inspection unit. The c chart discussed earlier (page 358) is used when the sample
size is one.

Specification

First move the name of the variable containing the number of defects for each
sample to the Defect Count Variable box. If the sample size is constant, select the
Constant sample size radio button and enter the sample size in the Sample Size
Constant edit control. If the sample size isn’t always the same, select the Variable
sample size radio button and move the name of a second variable that contains the
sample size for each case to the Sample Size Variable box.

The center line and control limits are computed from u, the number of
nonconformities per unit. You can enter a standard or historical value for u, or you
can use the average value of u computed from the data. When u is estimated from
the data, you can specify the first and last case number used to compute the
average value. If these case numbers are left blank, as in the dialog box above, all
cases are used. When you select Standard Value, you must enter a value in the
Standard u edit control.

Example We use data from Montgomery (1991, p. 181) for our example. Five personal
computers were periodically sampled from the final assembly line and inspected
for defects. The inspection unit was one computer. The sample size was five.

The total number of defects for each sample of five computers are listed below,
and are available in the file Sample Data\Computers.sx.

Case Defects Case Defects


1 10 11 9
2 12 12 5
3 8 13 7
4 14 14 11
5 10 15 12
6 16 16 6
7 11 17 8
8 7 18 10
9 10 19 7
10 15 20 5

The variable for the defect counts and the sample size are entered in the u chart
dialog box on the preceding page. The results are presented in the figure below.

The number of nonconformities per computer is plotted for each sample of five
computers. The center line is plotted at the average value for u, 1.93. The 3-sigma
lower and upper control limits are labeled on the right side at 0.07 and 3.79. The u
chart doesn’t exhibit lack of statistical control.
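
For the u chart the Poisson count is divided by the sample size, so the limits are
ū ± 3√(ū/n). Recomputing the example values in Python (illustrative, not Statistix
code):

    import numpy as np

    defects = np.array([10, 12, 8, 14, 10, 16, 11, 7, 10, 15,
                        9, 5, 7, 11, 12, 6, 8, 10, 7, 5])
    n = 5                                  # computers per sample
    u = defects / n                        # nonconformities per unit
    u_bar = u.mean()                       # center line: 1.93
    sigma = np.sqrt(u_bar / n)
    ucl = u_bar + 3 * sigma                # 3.79
    lcl = max(u_bar - 3 * sigma, 0.0)      # 0.07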

X Bar Chart
The X Bar Chart, which plots sample averages, is used to control the process
average of a variable. A quantitative quality characteristic, such as the diameter of
a piston ring, is called a variable. This chart is normally used in conjunction with
either the R chart or the S chart to control the variability in the process.

Specification The X bar chart requires that the quality characteristic is sampled with a sample
size of at least two (use the I chart when the sample size is one). The data must be
arranged in a Statistix data set so each case represents one sample and the
individual measurements of a sample are recorded in separate variables (see the
example data on the next page). If your data are presented in a single column, you
can use the Unstack command to rearrange the data.

Move the names of the variables containing the individual measurements of the
quality characteristic to the X Bar Chart Variables list box. You must select at
least two variables, but no more than 20.

The center line and control limits are computed from the mean (mu) and standard
deviation (sigma) of the process. You can choose to enter standard or historical
values for mu and sigma, or you can have estimates computed from the data. If
you select the Compute From Data radio button, you must also select a method
for estimating sigma—the R-Bar or S-Bar method. You should choose the
method that corresponds to the control chart you’re using to control the process
variability, either the R chart or the S chart. You can also specify the First Case
and Last Case to use to compute mu and sigma. If these case numbers are left
blank, all cases are used.

If you select the Standard Value method, you must enter values in the Standard
Mu and Standard Sigma edit controls.

Check the Test Special Causes check box to have eight tests for special causes
performed. These tests are described on the next page.

Example We use data from Montgomery (1991, p. 206) for our example. Automotive piston
rings were sampled from a forge. Five rings were taken per sample. The inside
diameters of the piston rings were recorded. The data are listed below, and are
available in the file Sample Data\Pistons.sx.

CASE X1 X2 X3 X4 X5
1 74.030 74.002 74.019 73.992 74.008
2 73.995 73.992 74.001 74.011 74.004
3 73.988 74.024 74.021 74.005 74.002
4 74.002 73.996 73.993 74.015 74.009
5 73.992 74.007 74.015 73.989 74.014
6 74.009 73.994 73.997 73.985 73.993
7 73.995 74.006 73.994 74.000 74.005
8 73.985 74.003 73.993 74.015 73.988
9 74.008 73.995 74.009 74.005 74.004
10 73.998 74.000 73.990 74.007 73.995
11 73.994 73.998 73.994 73.995 73.990
12 74.004 74.000 74.007 74.000 73.996
13 73.983 74.002 73.998 73.997 74.012
14 74.006 73.967 73.994 74.000 73.984
15 74.012 74.014 73.998 73.999 74.007
16 74.000 73.984 74.005 73.998 73.996
17 73.994 74.012 73.986 74.005 74.007
18 74.006 74.010 74.018 74.003 74.000
19 73.984 74.002 74.003 74.005 73.997
20 74.000 74.010 74.013 74.020 74.003
21 73.988 74.001 74.009 74.005 73.996
22 74.004 73.999 73.990 74.006 74.009
23 74.010 73.989 73.990 74.009 74.014
24 74.015 74.008 73.993 74.000 74.010
25 73.982 73.984 73.995 74.017 74.013
26 74.012 74.015 74.030 73.986 74.000
27 73.995 74.010 73.990 74.015 74.001
28 73.987 73.999 73.985 74.000 73.990
29 74.008 74.010 74.003 73.991 74.006
30 74.003 74.000 74.001 73.986 73.997
31 73.994 74.003 74.015 74.020 74.004
32 74.008 74.002 74.018 73.995 74.005
33 74.001 74.004 73.990 73.996 73.998
34 74.015 74.000 74.016 74.025 74.000
35 74.030 74.005 74.000 74.016 74.012
36 74.001 73.990 73.995 74.010 74.024
37 74.015 74.020 74.024 74.005 74.019
38 74.035 74.010 74.012 74.015 74.026
39 74.017 74.013 74.036 74.025 74.026
40 74.010 74.005 74.029 74.000 74.020

The parameters for the X bar chart are specified in the dialog box on the preceding
page. Note that cases 1 through 25 have been entered as the cases (or samples) to
use to estimate mu and sigma. The results are presented on the next page.

The sample averages are plotted for each case. The values for the center line and
the 3-sigma control limits are labeled on the right side. The estimates for sigma
and expected R bar computed from the first 25 cases are given in the footnote.
Note the upward drift in the process mean, starting around case 34. The points at
cases 37, 38, and 39 exceed the UCL. The points at cases 36 and 40 are marked
with a 5. This means that these points failed test number 5: two out of three
consecutive points in zone A (beyond the 2-sigma limits).
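
The R-Bar method mentioned above estimates sigma as the mean sample range
divided by the bias-correction constant d2 (2.326 for samples of five). A sketch of
the computation under those standard constants (illustrative Python, not
Statistix’s code):

    import numpy as np

    D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}   # d2 constants by sample size

    def xbar_limits(samples):
        """3-sigma X bar limits via the R-bar method; samples is a 2-D
        array with one row per sample (e.g. the 25 base cases)."""
        samples = np.asarray(samples, float)
        n = samples.shape[1]
        center = samples.mean()                     # grand mean
        r_bar = np.ptp(samples, axis=1).mean()      # average sample range
        sigma = r_bar / D2[n]                       # estimate of process sigma
        h = 3 * sigma / np.sqrt(n)
        return center - h, center, center + h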

Tests for Special Causes
A “special cause”, or “assignable cause”, is signaled on control charts when a point
is plotted outside the 3-sigma control limits. In addition to a point outside the
UCL and LCL, there are other tests based on patterns of more than one point that
can be applied to the X bar and I charts. These tests are summarized by Nelson
(1984).

When these tests are selected, Statistix will indicate an out of control point using a
digit indicating the test that caused the signal. If a point fails more than one test,
the lowest digit is used to mark the point.

Test #1: A point outside the 3-sigma control limits.

Test #2: Nine points in a row on one side of the center line.

Test #3: Six points in a row, either all increasing or all decreasing.

Test #4: Fourteen points in a row, alternating up and down.

Tests #5 through #8 refer to zones A, B, and C. Zone A is the area of the chart
between the 2- and 3-sigma lines. Zone B is the area between the 1- and 2-sigma
lines. Zone C is the area between the center line and the 1-sigma line.

Test #5: Two out of three points in a row in zone A or beyond on one side of the
center line.

Test #6: Four out of five points in a row in zone B or beyond on one side of the
center line.

Test #7: Fifteen points in a row in zone C on either side of the center line.

Test #8: Eight points in a row on either side of the center line, but none of them in
zone C.

By using a number of these tests at once, you increase the sensitivity of the control
chart. But you also increase the chance of a false alarm.
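
As an illustration of how two of these rules can be checked programmatically
(a sketch, not Statistix’s implementation):

    import numpy as np

    def test_1(x, center, sigma):
        """Test #1: points outside the 3-sigma limits (1-based case numbers)."""
        z = (np.asarray(x, float) - center) / sigma
        return list(np.where(np.abs(z) > 3)[0] + 1)

    def test_2(x, center):
        """Test #2: nine points in a row on one side of the center line."""
        side = np.sign(np.asarray(x, float) - center)
        return [i + 1 for i in range(8, len(side))
                if abs(side[i - 8:i + 1].sum()) == 9]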

R Chart
The R Chart, which plots sample ranges, is used to control the process variability
of a variable. A quantitative quality characteristic, such as the diameter of a piston
ring, is called a variable. The S chart is an alternative control chart for process
variability.

Specification

The R chart requires that the quality characteristic is sampled with a sample size of
at least two (use the MR chart when the sample size is one). The data must be
arranged in a Statistix data set such that each case represents one sample and the
individual measurements of a sample are recorded in separate variables. If your
data are presented in a single column, you can use the Unstack command to
rearrange the data.

Select the names of the variables containing the individual measurements of the
quality characteristic and move them to the R Chart Variables list box. You must
select at least two variables, but no more than 20.

The center line and control limits are computed from the standard deviation
(sigma) of the process. You can choose to enter a standard or historical value for
sigma, or you can have an estimate computed from the data. If you select the
Compute From Data radio button, you can specify the First Case and Last Case
to use to compute sigma. If these case numbers are left blank, all cases are used.

If you select the Standard Value method, you must enter a value in the Standard
Sigma edit control.

Example We use data from Montgomery (1991, p. 206) for our example.
Automotive piston rings were sampled from a forge. Five rings were selected per
sample. The inside diameter of each piston ring was measured. The data are listed
on page 363, and are available from the file Sample Data\Pistons.sx.

The R chart model is specified in the dialog box on the preceding page. The
results are presented below.

The sample ranges are plotted for each case. The values for the center line and the
3-sigma control limits are labeled on the right side. The value used for sigma, in
this case estimated from the data, is given at the bottom of the R chart.

S Chart
The S Chart, which plots sample standard deviations, is used to control the
process variability of a variable. A quantitative quality characteristic, such as the
diameter of a piston ring, is called a variable. The R chart discussed earlier is
another control chart for process variability.

Specification The S chart requires that the quality characteristic is sampled with a sample size of
at least two (use the MR chart when the sample size is one). The data must be
arranged in a Statistix data set so each case represents one sample and the
individual measurements of a sample are recorded in separate variables. If your
data are presented in a single column, you can use the Unstack command to
rearrange the data.

Select the names of the variables containing the individual measurements of the
quality characteristic and move them to the S Chart Variables list box. You must
list at least two variables, but no more than 20.

The center line and control limits are computed from the standard deviation
(sigma) of the process. You can choose to enter a standard or historical value for
sigma, or you can have an estimate computed from the data. If you select the
Compute From Data radio button, you can specify the First Case and Last Case
to use to compute sigma. If these case numbers are left blank, all cases are used.

If you select the Standard Value method, you must enter a value in the Standard
Sigma edit control.

Example We use data from Montgomery (1991, p. 206) for our example. Automotive piston
rings were sampled from a forge, five rings per sample. The inside diameter of the
piston ring was the quality characteristic of interest. This data set was also used as
the X Bar Chart example. The data are listed on page 363, and are available from
the file Sample Data\Pistons.sx.

The S chart model is specified in the dialog box on the preceding page. The results
are displayed below.

The sample standard deviations are plotted for each sample. The values for the
center line and the 3-sigma control limits are labeled on the right side. The value
used for sigma, in this case estimated from the data, is given at the bottom of the S
chart.

I Chart
The I Chart is the control chart for individuals used to control the
process mean. This chart is sometimes called the X chart. The
process variability is estimated from the moving range, which is the
difference between two successive observations. The MR chart is the
companion chart used to control the process variability.

Specification

Select the name of the variable that contains the individual observations and move
it to the I Chart Variable box.

The center line and control limits are computed from the mean (mu) and standard
deviation (sigma) of the process. You can choose to enter standard or historical
values for mu and sigma, or you can have estimates computed from the data. If
you select the Compute From Data radio button, you can specify the First Case
and Last Case to use to compute mu and sigma. If these case numbers are left
blank, all cases are used.

If you select the Standard Value method, you must enter values in the Standard
Mu and Standard Sigma edit controls.

Check the Test Special Causes check box to have eight tests for special causes
performed. These tests are described on page 364.

Example We use data from Montgomery (1991, p. 242) for our example. The viscosity of
batches of aircraft paint primer was measured. Because it takes several hours to
make one batch of primer, it wasn’t practical to accumulate samples of more than
one batch. The data from 30 batches of primer are listed below. The data are
available from the file Sample Data\Paint.sx.

Case Viscosity Case Viscosity Case Viscosity


1 33.75 11 33.62 21 34.80
2 33.05 12 33.00 22 34.55
3 34.00 13 33.54 23 35.00
4 33.81 14 33.12 24 34.75
5 33.46 15 33.84 25 34.50
6 34.02 16 33.50 26 34.70
7 33.68 17 33.25 27 34.29
8 33.27 18 33.40 28 34.61
9 33.49 19 33.27 29 34.49
10 33.20 20 34.65 30 35.03

The dialog box on the preceding page specifies the I chart model. Cases 1-15 are
used to estimate the process mean and standard deviation. The results are
displayed below.

The individual observed values of primer viscosity are plotted for each case. Note
the shift in the process mean at case 20. Two points are plotted above the UCL.
This process is clearly out of control.
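
The I chart limits can be reproduced with a few lines of Python. Sigma is
estimated from the mean moving range divided by d2 = 1.128, the constant for
samples of two; this sketches the usual method, not Statistix’s code:

    import numpy as np

    def i_chart_limits(x, first=1, last=None):
        """3-sigma I chart limits, with mu and sigma estimated from the
        base cases first..last (1-based), e.g. cases 1-15 above."""
        x = np.asarray(x, float)
        base = x[first - 1:last]
        mu = base.mean()
        sigma = np.abs(np.diff(base)).mean() / 1.128   # MR-bar / d2
        return mu - 3 * sigma, mu, mu + 3 * sigma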

MR Chart
The MR Chart is the control chart for individuals used to control process
variability. The process variability is estimated from the moving range, the
difference between two successive observations: $MR = |x_i - x_{i-1}|$.
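
A matching sketch for the moving ranges themselves, using the standard n = 2
chart constants (d2 = 1.128, D4 = 3.267); this is illustrative only and assumes
sigma is estimated from the mean moving range:

    import numpy as np

    def mr_chart(x):
        """Moving ranges with center line MR-bar and 3-sigma UCL."""
        mr = np.abs(np.diff(np.asarray(x, float)))   # MR_i = |x_i - x_{i-1}|
        mr_bar = mr.mean()                           # center line
        ucl = 3.267 * mr_bar                         # D4 * MR-bar; LCL is 0
        return mr, mr_bar, ucl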

Specification The dialog box appears on the next page. Select the name of the variable that
contains the individual observations and move it to the MR Chart Variable box.

The center line and control limits are computed from the standard deviation
(sigma) of the process. You can choose to enter a standard or historical value for
sigma, or you can have an estimate computed from the data. If you select the
Compute From Data radio button, you can specify the First Case and Last Case
to use to compute sigma. If these case numbers are left blank, all cases are used.
If you select the Standard Value method, you must enter a value in the Standard
Sigma edit control.

Example We use data from Montgomery (1991, p. 242) for our example. The viscosity of
batches of aircraft paint primer was measured. Because it takes several hours to
make one batch of primer, it wasn’t practical to accumulate samples of more than
one batch. The example uses 30 batches of primer. This data set was also used for
the I Chart example and is listed on page 369. The data are available from the file
Sample Data\Paint.sx.

The dialog box above specifies the MR chart model. Cases 1-15 are used to
estimate the process standard deviation. The results are presented below.

The moving ranges are plotted for each case, starting with the second case. The
center line and 3-sigma UCL are labeled on the right side. The estimates for sigma
and expected MR bar, based on the first 15 cases, are given in the footnote.

The spike at case 20 signals the shift in the process mean observed in the example
I chart shown on page 370.

Montgomery (1991) warns that caution should be used when examining patterns in
MR charts. Because the moving ranges are correlated, runs and patterns arise
naturally in the charts.

EWMA Chart
The EWMA Chart is a control chart for variables used to control the process
mean. The value plotted on the chart is an exponentially weighted moving average
incorporating data from all previous samples as well as the sample mean itself.
This chart reacts faster than the X bar chart to small shifts in the process mean but
more slowly to large shifts.

Specification

The EWMA chart requires that the data be arranged so that each case represents
one sample and the individual measurements of a sample are recorded in separate
variables. If your sample size is greater than one but your data are presented in a
single column, use the Unstack command to rearrange the data.

Move the names of the variables containing the individual measurements of the
quality characteristic to the EWMA Chart Variables list box. The number of
variables selected equals the sample size, which must be constant.

The center line and control limits are computed from the mean (mu) and standard
deviation (sigma) of the process. You can choose to enter standard or historical
values for mu and sigma, or you can have estimates computed from the data. If
you select the Compute From Data radio button, you must also select a method
for estimating sigma—the R-Bar or S-Bar method. You can also specify the First
Case and Last Case to use to compute mu and sigma. If these case numbers are
left blank, all cases are used.

If you select the Standard Value method, you must enter values in the Standard
Mu and Standard Sigma edit controls.

The EWMA chart requires an EWMA Weight that determines the extent to which
past observations influence the exponentially weighted moving average plotted on
the chart. A small weight results in a slow response to new values; a large weight
results in a fast response to new values. Values used for the weight are normally in
the range 0.05 - 0.25.
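
The quantity plotted is the recursion z_t = λx̄_t + (1 − λ)z_{t−1} with z_0 = μ,
and the usual time-varying 3-sigma limits are
μ ± 3σ√(λ(1 − (1 − λ)^{2t}) / ((2 − λ)n)). The sketch below uses that textbook
limit convention, which is assumed here rather than taken from Statistix’s
documentation:

    import numpy as np

    def ewma_chart(xbar, mu, sigma, n, lam=0.2):
        """EWMA values and 3-sigma limits for a series of sample means."""
        z, zt = [], mu
        for x in xbar:                      # z_t = lam*x_t + (1-lam)*z_{t-1}
            zt = lam * x + (1 - lam) * zt
            z.append(zt)
        t = np.arange(1, len(z) + 1)
        w = 3 * sigma * np.sqrt(lam * (1 - (1 - lam) ** (2 * t))
                                / ((2 - lam) * n))
        return np.array(z), mu - w, mu + w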

Example We use data from Montgomery (1991, p. 206) for our example. Automotive piston
rings were sampled from a forge, five rings per sample. The inside diameter of the
piston ring was recorded. The data are listed on page 363, and are available from
the file Sample Data\Pistons.sx.

The EWMA chart is specified in the dialog box on the preceding page. The results
are displayed below.

The process mean and 3-sigma control limits are reported on the right, and the
process standard deviation (sigma) is reported at the bottom.

Tolerance Intervals
A tolerance interval is an interval, computed from a sample, within which we can
assert with a specified level of confidence that a specified percentage of the
population’s values lie. The Tolerance Intervals procedure computes one-sided
and two-sided tolerance intervals using either parametric (normal) or
nonparametric methods.

Specification

Move the names of the variables containing the individual measurements of the
quality characteristic to the Tolerance Variables list box. Enter one or more
values in the Percent of Population List box. Tolerance intervals will be reported
for each value entered. Enter a value in the C.I. Percent Coverage to specify the
confidence level for tolerance intervals.

Select an Interval Type: Two-sided, One-sided Lower, or One-sided Upper
intervals. Select a Distribution Assumption that’s appropriate for your data:
Parametric (for normally distributed data), or Nonparametric (for non-normally
distributed data).

Data Restrictions
When using the nonparametric method for computing tolerance intervals, the data
need not be normal, but the data must be continuous. That is, the method is not
appropriate for categorical data.

Example We use data from Montgomery (1991, p. 206) for our example. Automotive piston
rings were sampled from a forge, five rings per sample. The inside diameter of the
piston ring was recorded. The data are listed on page 363, and are available from
the file Sample Data\Pistons.sx. For the purpose of this example, we have used the
Stack procedure to combine the variables X1 through X5 into a new variable
diameter.

In the dialog box on the preceding page, the stacked variable diameter is selected
for computing tolerance intervals. Three values were entered for the percent of
population. Intervals will be reported for each value. The results are displayed
below.

Tolerance Intervals
Parametric (Normal) Two-sided tolerance interval with 95% confidence

                          Percent     Lower    Upper
Variable    N     Mean  Population    Bound    Bound
diameter  200   74.004      90       73.983   74.024
                            95       73.979   74.028
                            99       73.971   74.036

The report displays parametric two-sided tolerance intervals with 95% confidence
for three values of percent of the population. Using the values from the last line,
we can say that we are 95% confident that 99% of the population from which the
sample was drawn lies between 73.971 and 74.036.

Computational Notes
Computations for parametric intervals follow Howe (1969) and Guenther (1977).
One-sided bounds are computed using the noncentral t distribution.
Nonparametric intervals follow Bury (1999).
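
For the two-sided parametric case, Howe’s method multiplies the sample standard
deviation by a factor k = z_{(1+P)/2} √(ν(1 + 1/N)/χ²_{1−γ,ν}). The sketch below
illustrates Howe’s approximation (it closely reproduces the table above, but it is
not Statistix’s exact code):

    import numpy as np
    from scipy import stats

    def normal_tolerance_two_sided(x, pop=0.99, conf=0.95):
        """Two-sided normal tolerance interval using Howe's (1969) k factor."""
        x = np.asarray(x, float)
        n, nu = len(x), len(x) - 1
        z = stats.norm.ppf((1 + pop) / 2)
        chi2 = stats.chi2.ppf(1 - conf, nu)          # lower chi-square quantile
        k = z * np.sqrt(nu * (1 + 1 / n) / chi2)
        m, s = x.mean(), x.std(ddof=1)
        return m - k * s, m + k * s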

C H A P T E R

15
Survival Analysis
The statistical procedures in this chapter are used to analyze survival time data.
Survival time is defined as the time to the occurrence of a specific event, which
may be the development of a disease, response to a treatment, relapse, or death.
Survival analysis has been extended to fields beyond biomedical studies to include
electrical engineering, sociology, and marketing. An example of survival time in
sociology might be the duration of first marriages.

A common complication of survival data is censored observations. A censored
observation is one where the given event of interest wasn’t recorded, either
because the subject was lost to the study, or because the study ended before the
event occurred. The emphasis in this chapter is on procedures that can handle
censored observations.

The distribution of survival times is described using the survivorship function (or
survivor function) and the hazard function. The survivorship function S(t) is
defined as the probability that an individual survives longer than t. The hazard
function h(t) gives the conditional failure rate. It’s the probability of failure during
a small time interval assuming that the individual has survived to the beginning of
the interval.

The Kaplan-Meier procedure computes the Kaplan-Meier product limit estimates
of the survival functions. This method of estimating the survival functions can
handle censored data and doesn’t require any assumptions about the form of the
survival function. It’s appropriate for small and large data sets. The survivorship
and hazard functions can be plotted.

The Two-Sample Survival Tests procedure computes five nonparametric tests for
comparing two survival distributions: Gehan-Wilcoxon Test, Cox-Mantel Test,
Logrank Test, Peto-Wilcoxon Test, and Cox's F Test. These tests are based on the
ranks of the survival times and work for censored or uncensored observations.

The Multi-Sample Survival Tests procedure computes three nonparametric tests
for comparing three or more survival distributions: Gehan-Wilcoxon Test, Logrank
Test, and the Peto-Wilcoxon Test. These tests are extensions of the Kruskal-
Wallis test discussed in Chapter 6 and the two-sample survival tests. These tests
can be used to compare survival times for censored data.

The Mantel-Haenszel Test is used to compare survival experience between two
groups when adjustments for other prognostic factors are needed. It’s often used in
clinical and epidemiologic studies as a method of controlling the effects of
confounding variables.

The Proportional Hazards Regression procedure computes Cox’s proportional
hazards regression for survival data. It can be used to establish the statistical
relationship between survival time and independent variables or covariates
measured on the subjects. The reports include a regression coefficient table,
likelihood test for the overall model, and a variance-covariance matrix of the
coefficients. The regression coefficients can be used to compute relative risk, a
measure of the effect of a factor on a subject’s survival time.

The procedures in this chapter concentrate on survival time analyses that can
handle censored data. Many other procedures in Statistix are useful for analyzing
uncensored survival time data. These include the Wilcoxon Rank Sum Test and
Kruskal-Wallis AOV in Chapter 6, Logistic Regression in Chapter 8, and Two
By Two Tables and Spearman Rank Correlations in Chapter 11.

Kaplan-Meier
This procedure computes the Kaplan-Meier product limit estimates of the survival
functions. This method of estimating the survival functions can handle censored
data and doesn’t require any assumptions about the form of the survival function.
It’s appropriate for small and large data sets.

The procedure produces a survival function table, a percentiles report with
confidence limits, a survivorship function plot, and a hazard function plot.

Specification

Select the name of the variable containing the survival times and move it to the
Time Variable box. Select the variable used to indicate whether or not the event
of interest (e.g., death) occurred or not and move it to the Event Variable box.

The event variable must be coded 0 if the event did not occur (i.e., censored
observation) and 1 if the event did occur (i.e., uncensored observation).

You can specify a grouping variable using the Group Variable box. The values of
the group variable are used to divide the survival times into groups of interest (e.g.,
treatment). If you specify a group variable, survival functions are computed
separately for each group.

The product-limit table displays confidence intervals for the survivorship function.
You can change the C.I. Percent Coverage for the confidence intervals.

Data Restrictions
The event variable must be an integer or real variable and may only contain the
values 0 and 1. The group variable, if used, may be of any data type. Real values
are truncated to whole numbers. Strings are truncated to 20 characters.

Example The example data are invented remission durations for 10 patients with solid
tumors used for illustration by Lee (1992, p. 71).

Case Time Relapse


1 3.0 1
2 4.0 0
3 5.7 0
4 6.5 1
5 6.5 1
6 8.4 0
7 10.0 1
8 10.0 0
9 12.0 1
10 15.0 1

Six patients relapse at 3.0, 6.5, 6.5, 10, 12, and 15 months; 1 patient is lost to
follow-up at 8.4 months; and 3 patients are still in remission at the end of the study
after 4.0, 5.7, and 10 months. The remission times and relapse event indicators are
stored in the variables Time and Relapse.

The analysis is specified using the dialog box on the preceding page. The first
report displayed after pressing the OK button is the product-limit survival function
table displayed below.

Kaplan-Meier Product-Limit Survival Distribution


Time Variable: Time
Event Variable: Relapse

Time  Died  Censored  At Risk  Lower 95% C.I.   S(t)  Upper 95% C.I.  SE S(t)    H(t)
3.0 1 0 10 0.6137 0.9000 0.9808 0.0949 0.1054
4.0 0 1 9
5.7 0 1 8
6.5 2 0 7 0.3581 0.6429 0.8531 0.1679 0.4418
8.4 0 1 5
10.0 1 1 4 0.2066 0.4821 0.7690 0.1877 0.7295
12.0 1 0 2 0.0688 0.2411 0.5772 0.1946 1.4227
15.0 1 0 1 0.0000 0.0000 0.0000 0.0000 M

The table has one row for each distinct survival time. The column labeled “Died”
gives the number of subjects that had the event of interest recorded at the survival
time for the row. In this example, it’s the number of patients that relapsed. The
column labeled “Censored” gives the number of censored observations at the
survival time for the row. The “At Risk” column gives the number of subjects still
in the study before the survival time for the row. The values for the survivorship
function are given in the column labeled “S(t)” for each survival time that a death
(relapse in this example) occurred. The lower and upper confidence limits, and the
standard error of the survivorship function are also given. The last column gives
the value of the hazard function (H(t)) for each survival time.
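
The product-limit estimate itself is S(t) = Π(1 − d_i/n_i) over the distinct death
times, and the hazard column is the cumulative hazard H(t) = −ln S(t). The sketch
below reproduces the S(t) column above (illustrative Python, not Statistix code):

    import numpy as np

    def kaplan_meier(time, event):
        """Product-limit estimates S(t) = prod(1 - d_i/n_i) at death times."""
        time, event = np.asarray(time, float), np.asarray(event)
        s, rows = 1.0, []
        for t in np.unique(time[event == 1]):
            n_i = np.sum(time >= t)                    # at risk just before t
            d_i = np.sum((time == t) & (event == 1))   # deaths at t
            s *= 1 - d_i / n_i
            rows.append((t, d_i, n_i, s))              # H(t) = -log(s)
        return rows

    times  = [3.0, 4.0, 5.7, 6.5, 6.5, 8.4, 10.0, 10.0, 12.0, 15.0]
    events = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1]
    print(kaplan_meier(times, events))   # S(t): 0.9000, 0.6429, 0.4821, ...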

Kaplan-Meier Results Menu
Once the analysis is computed and displayed, the Results menu icon, which
appears as a shovel (think “dig-deeper”), on the toolbar is enabled. Click on the
icon to display the results menu, as shown below.

Select Product-Limit Table from the menu to redisplay the results presented on the
preceding page. The remaining options are discussed below.

Percentiles One of the statistics that we want to find when analyzing survival data is
the median survival time. The Percentile report gives the median survival time
with confidence limits, plus the values for other percentiles that may interest you.
Select Percentiles from the Results menu and a dialog box like the one shown
below appears.

List one or more percentile values in the Percentiles Levels box. You can change
the C.I. Percent Coverage value. Press OK to display the report. The report for
the example tumor remission data is given below.

Kaplan-Meier Survivorship Percentiles
Time Variable: Time
Event Variable: Relapse

Lower Upper
Percentile 95% C.I. Time 95% C.I.
90 3.000 5.700 6.500
75 3.000 6.500 12.000
50 6.500 9.822 M
25 10.000 11.926 M
10 12.000 13.756 M

The median remission duration for the example data is 9.822 with a 95% lower
limit of 6.500. The upper limit can’t be computed because of the insufficient
number of uncensored observations, so an M is displayed.

Survivorship Function Plot
Select Survivorship Function Plot from the Results menu to plot the survivorship
function. If a group variable was specified on the Kaplan-Meier dialog box, a
separate line is plotted for each group.

The survivorship function for the example remission data is displayed below.

Hazard Function Plot
Select Hazard Function Plot from the Results menu to plot the hazard function. If
a group variable was specified on the Kaplan-Meier dialog box, a separate line is
plotted for each group. The hazard function for the example remission data is
displayed on the next page.

Computational Notes
Computations follow Lee (1992). The confidence intervals are computed using the
technique described by Simon and Lee (1982).

Two-Sample Survival Tests


This procedure computes five nonparametric tests for comparing two survival
distributions: Gehan-Wilcoxon Test, Cox-Mantel Test, Logrank Test, Peto-
Wilcoxon Test, and Cox's F Test. These tests are based on the ranks of the
survival times and work for censored or uncensored observations.

Specification

Select the variable that contains the survival times and move it to the Time
Variable box. Select the variable used to indicate whether or not the event of
interest (e.g., death) occurred or not and move it to the Event Variable box. Move
the variable used to identify the two groups to the Group Variable box.

Data Restrictions
The event variable must be an integer or real variable and may only contain the
values 0 and 1. The group variable may be of any data type but must have exactly
two values. Real values are truncated to whole numbers. Strings are truncated to
20 characters. Cox’s F-Test is only computed for complete or singly censored
data.

Example The example data are from Lee (1992, p. 107). Ten female patients with breast
cancer are randomized to receive either CMF (cyclic administration of
cyclophosphamide, methotrexate, and fluorouracil) or no treatment after a radical
mastectomy. The remission times recorded at the end of two years are given
below.

Case Time Relapse Trt


1 23 1 CMF
2 16 0 CMF
3 18 0 CMF
4 20 0 CMF
5 24 0 CMF
6 15 1 Control
7 18 1 Control
8 19 1 Control
9 19 1 Control
10 20 1 Control

The analysis is specified in the dialog box on the preceding page. The results are
displayed below.

Two-Sample Survival Tests


Time Variable: Time
Event Variable: Relapse
Group Variable: Trt (CMF, Control)

Gehan-Wilcoxon Test Cox-Mantel Test


W 18.000 U 2.7500
Var(W) 57.778 I 1.0875
Z 2.37 C 2.64
P 0.0179 P 0.0084

Logrank Test Peto-Wilcoxon Test


S 2.7500 S 2.1313
Var(S) 1.2106 Var(S) 0.7651
L 2.50 Z 2.44
P 0.0124 P 0.0148

All four tests provide strong evidence that the two treatments are different. The
positive signs of the test statistics (Z, C, L, and Z respectively) indicate that the
first treatment, CMF, is more effective than the second.
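
To make the mechanics concrete, the sketch below recomputes the Cox-Mantel
quantities for these data (U = 2.75, I = 1.0875, C = 2.64 above). It illustrates the
standard formulas, not Statistix’s code:

    import numpy as np

    def cox_mantel(time, event, group, g2):
        """U = observed minus expected deaths in group g2, summed over the
        distinct death times; I = hypergeometric variance; C = U/sqrt(I)."""
        time, event = np.asarray(time, float), np.asarray(event)
        in2 = np.asarray(group) == g2
        u = i = 0.0
        for t in np.unique(time[event == 1]):
            at_risk = time >= t
            n, n2 = at_risk.sum(), (at_risk & in2).sum()
            d = ((time == t) & (event == 1)).sum()
            u += ((time == t) & (event == 1) & in2).sum() - d * n2 / n
            if n > 1:
                i += d * (n2 / n) * (1 - n2 / n) * (n - d) / (n - 1)
        return u, i, u / np.sqrt(i)

    times  = [23, 16, 18, 20, 24, 15, 18, 19, 19, 20]
    events = [1, 0, 0, 0, 0, 1, 1, 1, 1, 1]
    groups = ["CMF"] * 5 + ["Control"] * 5
    print(cox_mantel(times, events, groups, "Control"))  # 2.75, 1.0875, 2.64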

The Cox’s F-Test can only be used for singly censored or complete samples. The
results above to not include Cox’s F-Test because the data are progressively
censored. Consider a second example containing singly censored data. In an
experiment comparing two treatments for solid tumor, six mice are assigned to
treatment A and six to treatment B (Lee, 1992, p. 115). The experiment is
terminated after 30 days. The following survival times are recorded (+ indicates
censored observations).
Treatment A: 8, 8, 10, 12, 12, 13
Treatment B: 9, 12, 15, 20, 30+, 30+

All of the mice died except for two mice that were still alive at the end of the
study. The analysis is specified in the same manner as before. The results are
displayed below.

Two-Sample Survival Tests


Time Variable: Time
Event Variable: Died
Group Variable: Trt (A, B)

Gehan-Wilcoxon Test Cox-Mantel Test


W -24.000 U -2.8306
Var(W) 152.73 I 1.6037
Z -1.94 C -2.24
P 0.0521 P 0.0254

Logrank Test Peto-Wilcoxon Test


S -2.8306 S -2.1667
Var(S) 2.0583 Var(S) 1.0795
L -1.97 Z -2.09
P 0.0485 P 0.0370

Cox's F Test
F 0.25
DF 12, 8
P 0.0305

The results for the five different tests are similar. Lee (1992) discusses under what
circumstances one test may be more powerful than another.

Computational Notes
The computations for these tests can be found in Lee (1992).

Multi-Sample Survival Tests


This procedure computes three nonparametric tests for comparing three or more
survival distributions: Gehan-Wilcoxon Test, Logrank Test, and the Peto-
Wilcoxon Test. These tests are extensions of the Kruskal-Wallis test discussed in
Chapter 6 and the two-sample survival tests discussed in the preceding section.

These tests can be used to compare survival times for censored data.

Specification

Select the variable that contains the survival times and move it to the Time
Variable box. Select the variable used to indicate whether or not the event of
interest (e.g., death) occurred or not and move it to the Event Variable box. Move
the variable used to identify the various groups to the Group Variable box.

Data Restrictions
The event variable must be an integer or real variable and may only contain the
values 0 and 1. The group variable may be of any data type. Real values are
truncated to whole numbers. Strings are truncated to 20 characters.

Example The example data are from Lee (1992, p. 127). The remission times are listed
below, and are stored in the file Sample Data\Leukemia.sx.
Case Time Relapse Trt Case Time Relapse Trt
1 4 1 1 34 75 1 2
2 5 1 1 35 99 1 2
3 9 1 1 36 103 1 2
4 10 1 1 37 162 1 2
5 12 1 1 38 169 1 2
6 13 1 1 39 195 1 2
7 10 1 1 40 220 1 2
8 23 1 1 41 161 0 2
9 28 1 1 42 199 0 2
10 28 1 1 43 217 0 2
11 28 1 1 44 245 0 2
12 29 1 1 45 8 1 3
13 31 1 1 46 10 1 3
14 32 1 1 47 11 1 3
15 37 1 1 48 23 1 3
16 41 1 1 49 25 1 3
17 41 1 1 50 25 1 3
18 57 1 1 51 28 1 3
19 62 1 1 52 28 1 3
20 74 1 1 53 31 1 3
21 100 1 1 54 31 1 3
22 139 1 1 55 40 1 3
23 20 0 1 56 48 1 3
24 258 0 1 57 89 1 3
25 269 0 1 58 124 1 3
26 8 1 2 59 143 1 3
27 10 1 2 60 12 0 3
28 10 1 2 61 159 0 3
29 12 1 2 62 190 0 3
30 14 1 2 63 196 0 3
31 20 1 2 64 197 0 3
32 48 1 2 65 205 0 3
33 70 1 2 66 219 0 3

Three different treatments are given to leukemia patients. We’re interested in
determining whether differences in remission times exist between the three groups.
The variable Relapse indicates whether each patient relapsed or not.

The analysis is specified using the dialog box on the preceding page. The results
are shown below.

Multi-Sample Survival Tests


Time Variable: Time
Event Variable: Relapse

Gehan-Wilcoxon Test
Trt N Sum Mean
1 25 -273.00 -10.920
2 19 170.00 8.9474
3 22 103.00 4.6818
Chi-Square 3.61 DF 2 P 0.1643

Logrank Test
Trt N Sum Mean
1 25 6.6349 0.2654
2 19 -3.6934 -0.1944
3 22 -2.9415 -0.1337
Chi-Square 3.81 DF 2 P 0.1485

Peto-Wilcoxon Test
Trt N Sum Mean
1 25 4.1072 0.1643
2 19 -2.6586 -0.1399
3 22 -1.4486 -0.0658
Chi-Square 3.46 DF 2 P 0.1769

The p-values for all three tests are larger than 0.05. The data do not show
significant differences among the three treatments.

Mantel-Haenszel Test
The Mantel-Haenszel Test is used to compare survival experience between two
groups when adjustments for other prognostic factors are needed. It’s often used in
clinical and epidemiologic studies as a method of controlling the effects of
confounding variables. The data are stratified by the confounding variable and cast
into a sequence of 2 × 2 tables.

Specification The test builds a series of 2 × 2 contingency tables. The data can be presented as
raw data where each case represents one subject. Or the data can be tabulated
where each case in the data set represents one cell in the contingency table. In the
latter case, a count variable is needed to supply Statistix with the counts for each
cell of the table. If your data are already tabulated, move the variable containing
the counts to the Count Variable box. If your data are not tabulated, leave the
Count Variable box empty.

Move the categorical variable that you want to use to identify the rows of the 2 × 2
tables to the Row Variable box. Move the categorical variable that you want to
use to identify the columns to the Column Variable box. Move the categorical
variable that you want to use to identify the levels of the confounding factor to the
Strata Variable box.

Data Restrictions
The row, column, and strata variables may be of any data type. Real values are
truncated to whole numbers. Strings are truncated to 20 characters. The maximum
number of strata is 500.

Example The example data are from Lee (1992). Five hundred and ninety-five people
participate in a case control study of the association of cholesterol and coronary
heart disease (CHD). Among them, 300 people are known to have CHD and 295
do not.

Case Count CHD Cholest Smoke


1 120 With CHD Elevated Smokers
2 20 W/O CHD Elevated Smokers
3 80 With CHD Normal Smokers
4 60 W/O CHD Normal Smokers
5 30 With CHD Elevated Nonsmokers
6 60 W/O CHD Elevated Nonsmokers
7 70 With CHD Normal Nonsmokers
8 155 W/O CHD Normal Nonsmokers

To find out if elevated cholesterol is significantly associated with CHD, the
investigator decides to control the effects of smoking. The study subjects are
divided into two strata: smokers and nonsmokers. The data are presented on the
preceding page, and are available in the file Sample Data\CHD.sx.

The data are already tabulated with the counts in the variable Count. The analysis
is specified on the preceding page. The results are displayed below.

Mantel-Haenszel Test
                        Cholest                  Percent
Smoke       CHD         Normal  Elevated  Total  Elevated
Nonsmokers  W/O CHD        155        60    215      27.9
            With CHD        70        30    100      30.0
            TOTAL          225        90    315      28.6

Smokers     W/O CHD         60        20     80      25.0
            With CHD        80       120    200      60.0
            TOTAL          140       140    280      50.0

Chi-Square 16.22
DF 1
P 0.0001

The results display the frequencies for the stratified 2 × 2 tables including row and
column subtotals. The rightmost column displays the percent of the second level
for the column variable for each row. The chi-square for the Mantel-Haenszel Test
is 16.22 with an associated p-value of 0.0001. We conclude that elevated
cholesterol is significantly associated with CHD after adjusting for the effects of
smoking.
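
The statistic sums a − E(a) and Var(a) over the strata and squares the total;
without the continuity correction (see the Computational Notes) this reproduces
the 16.22 above. An illustrative Python version, not Statistix code:

    def mantel_haenszel_chi2(tables):
        """Chi-square from a list of 2x2 tables [[a, b], [c, d]], where a is
        the count for the row and column levels of interest in each stratum."""
        num = var = 0.0
        for (a, b), (c, d) in tables:
            n = a + b + c + d
            num += a - (a + b) * (a + c) / n                 # a - E(a)
            var += (a + b) * (c + d) * (a + c) * (b + d) / (n ** 2 * (n - 1))
        return num ** 2 / var

    # a = With CHD and Elevated; strata are nonsmokers and smokers
    print(mantel_haenszel_chi2([[[30, 70], [60, 155]],
                                [[120, 80], [20, 60]]]))     # 16.22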

Computational Notes
The computations follow Lee (1992). We do not use the correction for continuity.

Proportional Hazards Regression


This procedure computes Cox’s proportional hazards regression for survival data.
The model assumes that individuals have hazard functions that are proportional to
one another, that is, that the ratio of the hazard functions for two individuals
doesn’t vary with time.

Proportional hazards regression is computed using the ranks of the survival times.
While it is useful for studying the relationships among the covariates, it can’t be
used to build prediction equations.

Specification The dialog box appears on the next page. Select the name of the variable
containing the survival times and move it to the Time Variable box. Select the
variable used to indicate whether or not the event of interest (e.g., death) occurred
or not and move it to the Event Variable box. The event variable must be coded 0
if the event did not occur (i.e., censored observation) and 1 if the event did occur
(i.e., uncensored observation).

Move one or more independent variables to the Independent Variables box. The
independent variables can be continuous or discrete variables. Discrete variables
must be entered using indicator (0 or 1) variables.

Data Restrictions
You can include up to 200 independent variables in the model. Discrete variables
must be coded using indicator (0 or 1) variables.

Example The example survival data in the table below are from 30 patients with AML (Lee,
1992, p. 257). Two possible prognostic factors X1 and X2 are considered. X1 is
coded 1 if the patient was 50 years old or older, and 0 otherwise. X2 is coded 1 if
cellularity of marrow clot section is 100%, and 0 otherwise. The data are available
in Sample Data\AML.sx.

Case Time DIED X1 X2 Case Time DIED X1 X2


1 18 1 0 0 16 8 1 1 0
2 9 1 0 1 17 2 1 1 1
3 28 0 0 0 18 26 0 1 0
4 31 1 0 1 19 10 1 1 1
5 39 0 0 1 20 4 1 1 0
6 19 0 0 1 21 3 1 1 0
7 45 0 0 1 22 4 1 1 0
8 6 1 0 1 23 18 1 1 1
9 8 1 0 1 24 8 1 1 1
10 15 1 0 1 25 3 1 1 1
11 23 1 0 0 26 14 1 1 1
12 28 0 0 0 27 3 1 1 0
13 7 1 0 1 28 13 1 1 1
14 12 1 1 0 29 13 1 1 1
15 9 1 1 0 30 35 0 1 0

The analysis is specified in the dialog box above. The results are displayed on the
next page.

The coefficient table lists the regression coefficients, standard errors of the
coefficients, z-statistics (coefficient/standard error), p-values, and relative risks
(e^coefficient) for each independent variable. The z-statistic and the associated p-value
test the null hypothesis that the coefficient equals zero. The test for the
independent variable X1 is significant at the .05 level. The positive sign for the
coefficient indicates that the older patients have a higher risk of dying. The



estimated risk of dying for patients at least 50 years of age is 2.75 times higher
than that for patients less than 50.

Proportional Hazards Regression

Time Variable: Time
Event Variable: Died

Variable Coefficient Std Error Z P Rel Risk


X1 1.01317 0.45740 2.22 0.0268 2.75
X2 0.35025 0.43917 0.80 0.4252 1.42

Log Likelihood, No Variables -65.83
Log Likelihood, Model -63.12
Chi-Square, Overall Model 5.43
DF 2
P 0.0663

The statistics below the list of regression coefficients are used to test the fit of the
overall model. The chi-square value given is called the likelihood ratio test. It tests
whether the independent variables in the model contribute to the prediction of
survivorship. The test for this example is almost significant at the .05 level.
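The same model can be checked outside Statistix. The sketch below uses the
third-party Python package lifelines (an assumed choice; any Cox regression
routine would do) with the 30 AML cases keyed in from the table above. Note that
lifelines handles tied survival times with Efron's method by default rather than the
Kalbfleisch-Prentice approach used here, so its estimates may differ slightly from
the report.

import pandas as pd
from lifelines import CoxPHFitter   # third-party package: pip install lifelines

# AML data from the table above; cases 1-13 have X1 = 0, cases 14-30 have X1 = 1.
df = pd.DataFrame({
    "Time": [18, 9, 28, 31, 39, 19, 45, 6, 8, 15, 23, 28, 7, 12, 9,
             8, 2, 26, 10, 4, 3, 4, 18, 8, 3, 14, 3, 13, 13, 35],
    "Died": [1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1,
             1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
    "X1":   [0]*13 + [1]*17,
    "X2":   [0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0,
             0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0],
})
cph = CoxPHFitter()
cph.fit(df, duration_col="Time", event_col="Died")
cph.print_summary()   # coefficients, SEs, z, p, and exp(coef) = relative risk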

Variance-Covariance of Betas The results menu for proportional hazards regression includes an option to obtain
the variance-covariance matrix of the regression coefficient estimates. The matrix
for the AML example is displayed below.

Variance-Covariance Matrix for Coefficients

X1 X2
X1 0.20921
X2 0.16525 0.19287

Computational Notes Computations follow Kalbfleisch and Prentice (1980).



C H A P T E R  16

Power Analysis
Much of what we do with statistics is test the null hypothesis that a treatment has
no effect. We choose a small value for the significance criterion, alpha (say, 0.05),
to limit rejecting the null hypothesis when it is in fact true (Type I error). Power
analysis is concerned with Type II errors: accepting the null hypothesis when it is
in fact false. When designing experiments, it’s important that there be sufficient
power to detect meaningful effects when they exist.

The power of a test of a null hypothesis is the probability that the test will lead to
the rejection of the null hypothesis when it is in fact false. Power is influenced by
three parameters:
Alpha: the statistical significance criterion used in the test.
Effect size: the magnitude of the effect of interest.
Sample size.

To compute power using the procedures described in this chapter, you’ll supply
values for alpha, effect size, and sample size. When performing power analysis,
we often want to determine the sample size required to obtain a powerful test. You
can use Statistix to solve for sample size given target values for power and effect
size. Many of the power analysis procedures described here will also solve for
effect size given values for power and sample size.

You’ll need to supply estimates for other parameters as well, depending on the test.
For example, you’ll need to provide estimates of variability (standard deviations)
for tests of means. These estimates may come from prior experiments or pilot
studies conducted specifically to obtain the estimates.

The dialog boxes used for power analysis have some common characteristics. The
top of each dialog box has three tabs: Report, Table, and Plot. Selecting one of
these tabs reveals a different version of the dialog box used to produce either a
report, table, or plot. A Report is used to compute a single value for power, sample
size, or effect size. Select Table to produce a table of computed values (power,
sample size, or effect size) for equally-spaced values of one or two input
parameters. Select Plot to produce X-Y line plots of power, effect size, or sample
size.

At a minimum, power should be greater than 0.50. When power is less than 0.50,
you have a greater chance of failing to detect an effect than succeeding. The
consensus is that power of 0.80 is usually adequate (Murphy and Myors, 2004).



Higher power is desirable, but must be balanced against the cost of obtaining larger
samples.

Power analysis is intended to be used while designing studies. Be careful not to
use power analysis retrospectively to draw additional conclusions about a study
that’s been completed. Some programs report “observed power” as part of the
output of a test: the power of detecting the observed effect with the sample size
used. Values for observed power don’t add any information to the p-value of the
test (Hoenig and Heisey, 2001). If the test was nonsignificant, then it follows that
the power was inadequate to detect the observed difference. If the test was
significant, then the power was adequate.

One-Sample/Paired T Test
This procedure performs power analysis for either the one-sample t test or the paired
t test. The one-sample t test is used to test the hypothesis that a mean is equal to a
particular value (or, that the difference between the actual and hypothesized mean
is zero). The paired t test is used to test the hypothesis that the mean of the
differences of paired values is zero.

Specification

First select Report, Table, or Plot by clicking on the corresponding tab located at
the top of the dialog box. The dialog above shows an example of a Report dialog.
The Row Variable and Column Variable options are disabled because they’re used
for the Table dialog, not the Report dialog. Next make a selection for Solve for:
Power, Effect Size, or Sample Size. The dialog above shows an example of
solving for power, so the box used to enter a value for power is disabled because it
won’t be used.



Next, select the Alternative Hypothesis you intend to use when analyzing the data
for the experiment: One-sided Test or Two-sided Test. Enter a value for Alpha:
the statistical significance criterion used in the test. Enter a value for Sigma: an
estimate of the standard deviation of the population from which the sample will be drawn.

When solving for sample size or power, you’ll need to specify a value for effect
size. Enter a value for the Difference: the difference you’d like to be able to detect
between the hypothesized mean and the observed mean. When solving for power
or effect size, you’ll need to specify a value for Sample Size. When solving for
effect size or sample size, you’ll need to specify a value for Power. Press OK to
display the report.

Example The One-Sample T Test procedure in Chapter 6 used an example of ten frozen
dinners labeled “200 calories” randomly selected from a day’s production at a
factory. The caloric content of the dinners were measured at 198, 203, 223, 196,
202, 189, 208, 215, 218, 207. The object of the analysis was to determine if the
mean caloric content was significantly different from the hypothesized value of
200.

Now let’s suppose we’re interested to taking additional samples in the future, and
we’d like to determine the power of detecting a difference of 10 calories or more
from a sample of 10 dinners. The original report gave a standard error of the mean
of 3.32892. From that we can calculate an estimate of the population standard
deviation as 3.32892 × %10 = 10.52. These values are entered in the dialog box on
the preceding page.

The results are displayed below. The power of detecting a difference of 10 or
more calories using the two-tailed one-sample t test with a value of 0.05 for alpha,
a population standard deviation of 10.52, and a sample size of 10 is 0.763.

Power Analysis for One-Sample/Paired T Test

Solve for Power

Given
Two-tailed Test
Alpha 0.05
Sigma 10.52
Difference 10
Sample Size 10

Results
Power 0.763
Critical T Value 2.26
DF 9
NCP (δ) 3.006

If the null hypothesis that the mean calorie content equals 200 (and that the
difference from 200 is zero) is correct, then means of samples will follow the
central t-distribution. The noncentrality parameter (customarily referred to as δ for



a t-distribution) for the central t-distribution is zero. If the null hypothesis is
incorrect and the mean difference is not zero, the samples will follow a noncentral
t-distribution and delta will be nonzero. The greater the mean difference, the
greater the noncentrality parameter, and consequently the greater the power of
detecting a difference. The noncentrality parameter is calculated as δ = d √n / σ,
where d is the true difference, n is the sample size, and σ is the population standard
deviation (Winer et al., 1991).

The power is computed using the probability function for Student’s t distribution
discussed in Chapter 17. The format is T(X, DF, δ), where X is the critical t value,
DF is the degrees of freedom, and δ is the noncentrality parameter. We find that
the lower tail for T(-2.26, 9, 3.006) = 0.000 and the upper tail for T(2.26, 9, 3.006)
= 0.763. We sum these two values to find the power: 0.000 + 0.763 = 0.763. This
is illustrated in the plots of the two relevant t-distributions below.

The first plot is the central t-distribution used to test the null hypothesis. The
shaded area marks the rejection zone: t statistics less than -2.26 and greater than
2.26. The shaded area is alpha = 0.05. If the null hypothesis were true, we’d reject
it 5% of the time.

The second plot is the noncentral t-distribution for δ = 3.006. It shows the
distribution of t statistics we’d expect if the true mean differed from the
hypothesized mean by 10. Again, the shaded area marks the rejection zone: t
statistics less than -2.26 and greater than 2.26. The shaded area is power = 0.763.
If the null hypothesis were false and the true mean differed from the hypothesized
mean by 10, we’d reject the null hypothesis 76.3% of the time.

Statistix can also determine the sample size required to obtain a desired level of



power. Continuing with the frozen dinner example, let’s determine the sample size
required to obtain power of 0.90 for detecting a difference of 10 calories. The
analysis is specified in the dialog box below.

The results are displayed below.

Power Analysis for One-Sample/Paired T Test

Solve for Sample Size

Given
Two-tailed Test
Alpha 0.05
Sigma 10.52
Difference 10
Power 0.90

Results
Sample Size 14
Actual Power 0.907
Critical T Value 2.16
DF 13
NCP (δ) 3.557

A sample size of 14 frozen dinners is required to obtain power of 0.90 of detecting
a difference of 10 calories using the two-tailed one-sample t test with a value of
0.05 for alpha and a population standard deviation of 10.52.

The actual power of 0.907 is included in the report. The actual power is often
larger than the target value because sample sizes are discrete numbers.
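Solving for sample size amounts to increasing n until the computed power reaches
the target, which is why the actual power usually overshoots. A sketch of that
search, under the same scipy assumption as before:

import numpy as np
from scipy import stats

alpha, sigma, diff, target = 0.05, 10.52, 10.0, 0.90

def power(n):
    df = n - 1
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    delta = diff * np.sqrt(n) / sigma
    return stats.nct.cdf(-t_crit, df, delta) + stats.nct.sf(t_crit, df, delta)

n = 2
while power(n) < target:        # smallest n whose power reaches the target
    n += 1
print(n, round(power(n), 3))    # 14 0.907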

Now let’s suppose we want to know how large a difference we can expect to detect
with a sample size of 10 and still obtain a value of 0.90 for power. This analysis is
specified in the dialog box below.



The results are displayed below.

Power Analysis for One-Sample/Paired T Test

Solve for Effect Size

Given
Two-tailed Test
Alpha 0.05
Sigma 10.52
Sample Size 10
Power 0.90

Results
Difference 12.20
Critical T Value 2.26
DF 9
NCP (δ) 3.667

This result says that we can detect a difference of 12.20 calories with a sample size
of 10 frozen dinners and power of 0.90.
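Solving for effect size is a root-finding problem: find the difference d at which
power equals the target. A sketch using scipy's brentq root finder (an assumed
choice; Statistix's internal method isn't documented here):

import numpy as np
from scipy import stats
from scipy.optimize import brentq

alpha, sigma, n, target = 0.05, 10.52, 10, 0.90
df = n - 1
t_crit = stats.t.ppf(1 - alpha / 2, df)

def power(d):
    delta = d * np.sqrt(n) / sigma
    return stats.nct.cdf(-t_crit, df, delta) + stats.nct.sf(t_crit, df, delta)

# Power rises monotonically with d, so bracket the root and solve.
d = brentq(lambda x: power(x) - target, 0.01, 100.0)
print(round(d, 2))   # ~12.20, with delta = d * sqrt(n) / sigma = 3.667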

Two-Sample T Test
This procedure performs power analysis for the two-sample t test. The null
hypothesis being tested in the two-sample t test is that the means of the two
populations from which the samples are drawn are equal. The analysis can handle
equal and unequal variances, and equal or unequal sample sizes.

Specification First select Report, Table, or Plot by clicking on the corresponding tab located at
the top of the dialog box. The dialog box on the next page shows an example of a



Report dialog.

Next make a selection for Solve for: Power, Effect Size, or Sample Size. The
dialog above shows an example of solving for sample size, so the boxes used to
enter sample sizes are disabled because they won’t be used.

The analysis can be performed assuming that the two populations being sampled
have equal or unequal variances. Make your selection from the Variance
Assumption list. Next, select the Alternative Hypothesis you intend to use when
analyzing the data for the experiment: One-sided Test or Two-sided Test.

When assuming equal variances, enter a value for Sigma 1: an estimate of the
standard deviation of the populations from which the samples will be drawn. For
unequal variances, enter separate estimates of the standard deviations for the two
populations in Sigma 1 and Sigma 2.

When solving for sample size or power, you’ll need to specify a value for effect
size. Enter a value for the Difference: the difference you’d like to be able to detect
between the two means. When solving for power or effect size, you’ll need to
specify a value for Sample Size. When solving for effect size or sample size,
you’ll need to specify a value for Power. Press OK to display the report.

Example The dialog box above illustrates how to solve for sample size assuming unequal
variances. The values 2 and 4 have been entered as estimates of the population
standard deviations. The mean difference we’d like to be able to detect is 2 at a



power level of 0.80. The results are displayed below.

Power Analysis for Two-Sample T Test

Solve for Sample Size

Given
Assume Unequal Variances
Two-tailed Test
Alpha 0.05
Sigma 1 2.0
Sigma 2 4.0
Difference 2.0
Power 0.80

Results
Sample Size 1 24
Sample Size 2 49
Total Sample Size 73
Actual Power 0.802
Critical T Value 1.99
DF 70.9
NCP (δ) 2.848

The total sample size required to obtain power of 0.80 for detecting a mean
difference of 2.0 using the two-tailed two-sample t test with a value of 0.05 for
alpha, and population standard deviations of 2.0 and 4.0 is 73. Note that the
resulting sample sizes are different for the two samples. These are the optimal
sizes for the two samples that minimize the total sample size, obtained by setting
the ratio n1/n2 equal to σ1/σ2, which minimizes the standard error of the
difference of the sample means.
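A sketch of the underlying Welch calculation, assuming scipy and the allocation
n1 = 24, n2 = 49 shown in the report:

import numpy as np
from scipy import stats

alpha, s1, s2, diff = 0.05, 2.0, 4.0, 2.0
n1, n2 = 24, 49                     # optimal allocation: n1/n2 = s1/s2
v1, v2 = s1**2 / n1, s2**2 / n2
se = np.sqrt(v1 + v2)               # SE of the difference of the two means
# Welch-Satterthwaite degrees of freedom, ~70.9
df = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
delta = diff / se                   # noncentrality, ~2.848
t_crit = stats.t.ppf(1 - alpha / 2, df)
power = stats.nct.cdf(-t_crit, df, delta) + stats.nct.sf(t_crit, df, delta)
print(round(df, 1), round(delta, 3), round(power, 3))   # 70.9 2.848 0.802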

Example Table



The dialog box above illustrates how to compute a table of sample sizes needed to
achieve a range of values for effect size and power. First, click on the Table tab to
display the dialog for tables. Select a Row Variable and optionally a Column
Variable. In the example, Effect Size and Power have been selected. The options
for the row and column variables are: power, sample size, effect size, sigma 1,
sigma 2, and alpha.

A range of values is specified for the row and column variables (difference and
power in the example) using three values: low, high, and step. The row variable is
limited to 100 intervals. The column variable is limited to six intervals. The
results are displayed below.

Power Analysis for Two-Sample T Test

Solve for Sample Size

Given
Assume Unequal Variances
Two-tailed Test
Alpha 0.05
Sigma 1 2.0
Sigma 2 4.0

Power
Difference 0.80 0.90
1.0 285 381
2.0 73 97
3.0 34 45
4.0 20 26
Sample sizes reported are total sample sizes.

This table can be displayed graphically by returning to the dialog box and clicking
on the Plot tab, and then pressing OK.

Analysis of Variance
Statistix can perform power analysis for a number of analysis of variance designs:
one-way, randomized complete block, Latin square, factorial, split-plot, strip-plot,
split-split-plot, and repeated measures. It offers two approaches: power of the F
test and power of multiple comparisons.

Power of the F Test The F test in analysis of variance tests the null hypothesis that all treatment means
are equal. We use the standard deviation of the effect as the measure of the effect
size. For a main effect, it's also called the standard deviation of the means. When
the null hypothesis is true, the standard deviation is zero. When the null hypothesis
is false, the standard deviation of an effect will be a positive number.

The standard deviation of the effect is the square root of the sum of squares of the
effects divided by the degrees of freedom. For example, consider a two-way
model where factor A has a levels and factor B has b levels:

Y_ijk = µ + α_i + β_j + (αβ)_ij + ε_ijk

the effect sizes are:

σ_A = [Σα_i²/(a-1)]^½ = [Σ(µ_i-µ)²/(a-1)]^½
σ_B = [Σβ_j²/(b-1)]^½ = [Σ(µ_j-µ)²/(b-1)]^½
σ_AB = [ΣΣ(αβ)_ij²/((a-1)(b-1))]^½ = [ΣΣ(µ_ij-µ_i-µ_j+µ)²/((a-1)(b-1))]^½

Whenever you see a box to enter a value for the SD of a main effect, you’ll see an
Estimate button next to it. For example, in the dialog box for power of the one-
way AOV, you’ll see this:

You can enter a number in the box, or press the Estimate button for alternative
methods of arriving at a value. An example Estimate dialog box is displayed on
the next page.

There are three methods to estimate the SD for a main effect. The first method is
to specify the Range and Mean Pattern. The Range is the difference between the
largest mean and smallest mean. There are three choices for Mean Patterns:
Minimum SD, Equally Spaced, and Maximum SD. The minimum SD pattern
arises when all the means other than the smallest and largest means are equal to the
overall mean (e.g., 70, 80, 80, 80, 90). It leads to the most conservative power
estimates and should be used when you have no information about the likely
pattern. The maximum SD arises when all the means are equal to either the
smallest or the largest (e.g., 70, 70, 70, 90, 90). It leads to the most liberal power
estimates. An equally spaced pattern (e.g., 70, 75, 80, 85, 90) leads to intermediate
SD values.

The second method of estimating the SD of a main effect is List Means. You
enter the list of hypothesized means in the Means box (e.g., 70 80 90). You can



also enter a list of hypothesized effects (e.g., -10 0 10). Enter one value for each
level for the main effect as specified in the main power dialog box. You can use a
repeat factor when entering a long list of means. For example, entering 3(80) in
the list is equivalent to entering 80 80 80.

The third method of estimating the SD of a main effect is Proportion of Variance.
The proportion of variance (written PV or η²) is the proportion of total variance
made up by the variance of treatment means:

PV = σ²_mean/σ²_total = σ²_mean/(σ²_error + σ²_mean)

You enter a proportion in the range 0 to 1 in the space provided.

When you click OK, the SD for the main effect is calculated using the values you
provided and is entered in the SD box.
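The three methods are connected by the formulas above. The short sketch below
(Python with numpy assumed) shows the List Means method for the minimum-SD
pattern 70 80 80 80 90 and recovers the same SD from the equivalent proportion of
variance; the value 10 for the error SD is hypothetical.

import numpy as np

# List Means method: SD of a main effect from hypothesized means
# (divisor a-1, matching the sigma_A formula above).
means = np.array([70.0, 80.0, 80.0, 80.0, 90.0])   # minimum-SD pattern, range 20
sd_means = np.sqrt(((means - means.mean())**2).sum() / (len(means) - 1))
print(round(sd_means, 4))                          # 7.0711

# Proportion of Variance method: the same SD recovered from PV.
sd_error = 10.0                                    # hypothetical error SD
pv = sd_means**2 / (sd_error**2 + sd_means**2)     # ~0.333
print(round(sd_error * np.sqrt(pv / (1 - pv)), 4)) # 7.0711 again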

Whenever you see a box to enter a value for the SD of a two-factor interaction,
you’ll see an Estimate button next to it. For example, in the dialog box for power
of the factorial design, you’ll see this:

You can enter a number in the box, or press the Estimate button for help
determining a value. An example Estimate dialog box for a 2×3 interaction A*B
is displayed below.

Enter a list of hypothesized means for the interactions in the cells provided. Rather
than entering means, you can enter interaction effects as was done in the example.
Using effects rather than means can make it easier to visualize the nature of the
interaction. When entering effects, rows and columns must sum to zero.

When you click OK, the SD for the interaction is calculated using the values you
provided and is entered in the SD box.

We use the F test F = MS_effect/MS_error to test for differences among means in
analysis of variance. If the null hypothesis that the means are equal is correct, then
the F statistic will follow the central F-distribution. The noncentrality parameter λ
for the central F-distribution is zero. If the null hypothesis is incorrect and the



means are not equal, the F statistic will follow a noncentral F-distribution and λ
will be positive. The greater the departure from mean equality, the greater the
noncentrality parameter, and consequently the greater the power.

Using the values you enter into a dialog box, Statistix calculates λ = df_effect (F_a - 1),
where F_a is the F statistic if the alternative hypothesis is correct and df_effect is the
numerator degrees of freedom of the F test (Liu and Raudenbush, 2004). F_a is
constructed from the expected mean squares in the usual manner. For a balanced
one-way AOV with treatment factor A with a levels and a per-treatment sample
size of n, F_a = (σ² + nσ²_A) / σ² and λ = (a-1)(F_a - 1).

Power is calculated using the probability function for the F distribution using the
critical F value and degrees of freedom that would be used for the F test, and λ:
Power = 1 - F(F_critical, df1, df2, λ).

Power of Multiple Comparisons You can also study the power of multiple comparisons of means, which test the
null hypothesis that two of the means are equal. This is perhaps more useful than
studying the power of the F test. When an F test is significant, we generally want
to know which means differ from one another. We can use one of the all-pairwise
comparison methods (Tukey's, Bonferroni's, etc.), or Dunnett's comparisons with
a control, or Hsu's comparisons with the best.

Reports for multiple comparisons provide a critical value for comparisons. It’s the
minimum difference between two means required for significance. The effect size
in power analysis is the Detectable Difference: the difference between two means
that an experiment will be able to detect with a target value of power.

To start, select Multiple Comparisons from the Study Power of list box. Make a
selection for Solve for: Power, Effect Size, or Sample Size. You’ll also select a
MC Method: the multiple comparison method you intend to use. The comparison
methods available are: Tukey’s HSD, LSD, Scheffe, Sidak, Bonferroni, Dunnett’s
comparison with a control, and Hsu’s comparison with the best. These methods
are discussed in Chapter 9.

When solving for power or sample size, you’ll need to specify an effect size. The
effect size used for multiple comparisons is the Detectable Difference: the mean
difference you want to be able to detect using multiple comparisons.

When solving for power or effect size, you’ll need to specify a value for sample
size. Enter the number per cell.

When solving for effect size or sample size, you’ll need to specify a value for
power.

The method for computing power is outlined in Bausell et al. (2002), except that
Statistix uses exact methods based on noncentral t-distributions rather than



approximate methods. The noncentrality parameter is calculated as δ = d / SE_diff,
where d is the detectable difference and SE_diff is the standard error of the difference
of two means. Power is calculated using the probability function for the t
distribution using the critical t value and degrees of freedom that would be used for
the t test, and δ: Power = T(-t_critical, df, δ) + (1 - T(t_critical, df, δ)).

One-Way AOV
This procedure performs power analysis for the balanced one-way AOV, also
known as the completely randomized design. You can study the power of the F test
and the power of multiple comparisons of means.

Specification

First select Report, Table, or Plot by clicking on the corresponding tab located at
the top of the dialog box. The dialog box above shows an example of a Report
dialog.

Next make a selection for Study Power of: F test or Multiple Comparisons. Make
a selection for Solve for: Power, Effect Size, or Sample Size. Enter a value for
Alpha: the statistical significance criterion used in the test.

Enter a value for SD(Error): the standard deviation due to random error among
observations within the same treatment. If you have the results from a similar one-
way AOV, you can use the square root of the error mean square as an estimate.



When solving for power or sample size, you’ll need to specify an effect size. For
the F test, the effect size is SD(Treatment): the standard deviation of the treatment
means. Press the Estimate button for help arriving at a value. For multiple
comparisons, the effect size is Detectable Difference: the mean difference you
want to be able to detect using multiple comparisons. You’ll also select a MC
Method: the multiple comparison method you intend to use.

When solving for power or effect size, you’ll need to specify a value for Sample
Size. Enter a number per treatment.

When solving for effect size or sample size, you’ll need to specify a value for
Power. Press OK to display the report.

Example The dialog box on the preceding page illustrates how to solve for power of the F
test for a one-way AOV with 5 treatments and a sample size of 6 per treatment. A
value of 10 was entered for SD(Error): the random variation within a treatment.
The SD(Treatment) value of 7.0711 was calculated in the Estimate dialog box
using the Range and Mean Pattern method. A value of 20 was entered for the range
and the minimum SD pattern selected. The results are displayed below.

Power Analysis for One-Way AOV

Study Power of F Test
Solve for Power

Given
Alpha 0.05
Number of Treatments 5
SD(Error) 10
Range 20
Pattern Minimum SD
SD(Treatment) 7.0711
Sample Size (per trt) 6
Total Sample Size 30

Results
Power 0.714
Critical F Value 2.76
DF 4,25
NCP (λ) 12.00

The report displays the method used to estimate SD(Treatment), in this case a
range of 20 and minimum SD mean pattern. The critical F value used to test
significance for this model is 2.76 with degrees of freedom 4 and 25. The power
computed using 12.00 for λ is 0.714.
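A sketch reproducing this report from the λ = (a-1)(F_a - 1) formula, assuming
scipy's central and noncentral F distributions:

from scipy.stats import f, ncf

a, n = 5, 6                                    # treatments, obs per treatment
sd_err, sd_trt, alpha = 10.0, 7.0711, 0.05

fa = (sd_err**2 + n * sd_trt**2) / sd_err**2   # F under the alternative, 4.0
lam = (a - 1) * (fa - 1)                       # noncentrality, ~12.00
df1, df2 = a - 1, a * (n - 1)                  # 4 and 25
f_crit = f.ppf(1 - alpha, df1, df2)            # ~2.76
power = ncf.sf(f_crit, df1, df2, lam)          # ~0.714
print(round(lam, 2), round(f_crit, 2), round(power, 3))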

Example Plot The dialog box on the next page illustrates how to plot the power for sample sizes
ranging from 3 to 12 for three possible effect sizes. The Solve for variable (i.e.,
power, sample size, or effect size) is plotted on the Y axis. There are five options
for the X Axis Variable and Lines Variable: power, sample size, effect size,



SD(Error), and alpha. If specified, the values for the Lines Variable are used to plot
separate connected lines on the same plot.

A range of values is specified for the X Axis Variable and Lines Variable using
three values: low, high, and step. The X Axis Variable is limited to 100 intervals.
The Lines Variable is limited to six intervals. The results are displayed below.

This plot can be displayed as a table by returning to the dialog box and clicking on
the Table tab, and then pressing OK.



Randomized Complete Block Design
This procedure performs power analysis for the balanced RCB design. The
experimental error is reduced by removing variation between blocks. You can
study the power of the F test and the power of multiple comparisons of means.

Specification

First select Report, Table, or Plot by clicking on the corresponding tab located at
the top of the dialog box. The dialog box above shows an example of a Report
dialog.

Next make a selection for Study Power of: F test or Multiple Comparisons. Make
a selection for Solve for: Power, Effect Size, or Sample Size. Enter a value for
Alpha: the statistical significance criterion used in the test.

Enter a value for SD(Error): the standard deviation due to random error among
observations within the same treatment and the same block. If you have the results
from a similar RCB design, you can use the square root of the error mean square as
an estimate.

When solving for power or sample size, you’ll need to specify an effect size. For
the F test, the effect size is SD(Treatment): the standard deviation of the treatment
means. Press the Estimate button for help arriving at a value. For multiple
comparisons, the effect size is Detectable Difference: the mean difference you
want to be able to detect using multiple comparisons. You’ll also select a MC
Method: the multiple comparison method you intend to use.



When solving for power or effect size, you’ll need to specify a value for Sample
Size. Enter a number of blocks. When solving for effect size or sample size,
you’ll need to specify a value for Power. Press OK to display the report.

Example The dialog box on the preceding page illustrates how to find the sample size
required to obtain power of 0.80 of detecting a mean difference of 12 using
Tukey's HSD method of all-pairwise multiple comparisons in an RCB design
with 4 treatments. A value of 8 was entered for SD(Error): the random variation
within a treatment and block combination. The results are displayed below.

Power Analysis for RCB Design

Study Power of Multiple Comparisons
Solve for Sample Size

Given
Alpha 0.05
Number of Treatments 4
SD(Error) 8.0
MC Method Tukey HSD
Detectable Difference 12
Power 0.80

Results
Sample Size (blocks) 12
Total Sample Size 48
Actual Power 0.826
Critical T Value 2.71
DF 33
NCP (δ) 3.674

Twelve blocks are required to achieve power of 0.80. The total sample size
required is 48. Tukey's critical quantile Q is converted to T = Q/√2 so the
probability function for the noncentral t distribution can be used to solve for
power.
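A sketch of that conversion, assuming scipy (whose studentized_range distribution
supplies Q) and the balanced RCB standard error SE_diff = SD(Error)·√(2/blocks):

import numpy as np
from scipy.stats import studentized_range, nct

t, b = 4, 12                          # treatments, blocks
sd_err, diff, alpha = 8.0, 12.0, 0.05

df = (t - 1) * (b - 1)                # RCB error df, 33
se_diff = sd_err * np.sqrt(2 / b)     # SE of a difference of treatment means
t_crit = studentized_range.ppf(1 - alpha, t, df) / np.sqrt(2)   # Tukey, ~2.71
delta = diff / se_diff                # ~3.674
power = nct.sf(t_crit, df, delta) + nct.cdf(-t_crit, df, delta)  # ~0.826
print(round(t_crit, 2), round(delta, 3), round(power, 3))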

Latin Square Design


This procedure performs power analysis for the balanced Latin square design. The
experimental error is reduced by removing variation between rows and columns.
You can study the power of the F test and the power of multiple comparisons of
means.

Specification First select Report, Table, or Plot by clicking on the corresponding tab located at
the top of the dialog box. The dialog box above shows an example of a Report
dialog.



Next make a selection for Study Power of: F test or Multiple Comparisons. Make
a selection for Solve for: Power, Effect Size, or Sample Size. Enter a value for
Alpha: the statistical significance criterion used in the test.

Enter a value for SD(Error): the standard deviation due to random error among
observations within a cell (a single combination of row, column, and treatment). If
you have the results from a similar Latin square design, you can use the square root
of the error mean square as an estimate.

When solving for power or sample size, you’ll need to specify an effect size. For
the F test, the effect size is SD(Treatment): the standard deviation of the treatment
means. Press the Estimate button for help arriving at a value. For multiple
comparisons, the effect size is Detectable Difference: the mean difference you
want to be able to detect using multiple comparisons. You’ll also select a MC
Method: the multiple comparison method you intend to use.

When solving for power or effect size, you’ll need to specify a value for Sample
Size. Enter a number of complete squares. The number of squares for a simple
Latin square design is 1. When more than one square is specified, squares are
added to the model as an additional level of blocking.

When solving for effect size or sample size, you’ll need to specify a value for
Power. Press OK to display the report.

Example The dialog box above illustrates how to find the mean difference that one would be
able to detect using Dunnett’s multiple comparisons with a control in a Latin
square design with 5 treatments, SD(error) of 4.0, and power of 0.80. The results



are displayed below.

Power Analysis for Latin Square Design

Study Power of Multiple Comparisons
Solve for Effect Size

Given
Alpha 0.05
Number of Treatments 5
SD(Error) 4.0
MC Method Dunnett
Sample Size (squares) 1
Total Sample Size 25
Power 0.80

Results
Detectable Difference 9.400
Critical T Value 2.81
DF 12
NCP (δ) 3.716

The resulting detectable difference using Dunnett’s test is 9.4. The critical t value
for the four comparisons is 2.81. The degrees of freedom for the test is 12. The
noncentrality parameter for the t distribution based on the difference of 9.4 with
SD(Error) of 4.0 is 3.716.

Factorial Design
This procedure performs power analysis for the balanced factorial design.
Replication, if any, is accomplished by assigning multiple subjects to cells, a cell
being a unique combination of the factors. The next section covers the factorial design
in blocks, where replication is accomplished by adding blocks. You can study the
power of the F test and the power of multiple comparisons of means.

Specification First select Report, Table, or Plot by clicking on the corresponding tab located at
the top of the dialog box. The dialog box above shows an example of a Report
dialog. Next make a selection for Study Power of: F test or Multiple
Comparisons. Make a selection for Solve for: Power, Effect Size, or Sample Size.
Enter a value for Alpha: the statistical significance criterion used in the test.

Specify the details of the factorial design using the next three options. Enter a
value for Number of Factors. Then list the number of levels for each factor in the
Factor Levels box. Make a selection for Interaction Terms: no interactions, up to
2-way, etc. The model specified in the dialog box on the preceding page is a
2×3×4 factorial that includes the main effects and all two-factor interactions:
Y_ijkl = µ + α_i + β_j + γ_k + (αβ)_ij + (αγ)_ik + (βγ)_jk + ε_ijkl



The letters A, B, C, etc. are automatically assigned to the factors. There are
several terms in the model that you may want to estimate power for, but you can
only analyze one at a time. Select a term from the Term of Interest drop-down
list. For the model specified above, the terms listed are A, B, C, A*B, A*C, and
B*C.

Enter a value for SD(Error): the standard deviation due to random error among
observations within a cell (a single combination of all factors). If you have the
results from a similar factorial design, you can use the square root of the error
mean square as an estimate.

When solving for power or sample size, you’ll need to specify an effect size. For
the F test, the effect size is SD(Effect): the standard deviation of the effect of
interest. The effect of interest may be a main effect or an interaction. Press the
Estimate button for help arriving at a value. For multiple comparisons, the
effect size is Detectable Difference: the mean difference you want to be able to
detect using multiple comparisons. You’ll also select a MC Method: the multiple
comparison method you intend to use.

When solving for power or effect size, you’ll need to specify a value for Sample
Size. Enter a number of observations per cell (a unique combination of the
factors). When solving for effect size or sample size, you’ll need to specify a value
for Power. Press OK to display the report.



Example The dialog box on the preceding page illustrates how to determine the sample size
required for power of 0.80 of detecting a mean difference of 25 for the main effect
C using Tukey’s HSD all-pairwise comparisons. The model is a 2×3×4 factorial
design that includes all two-factor interactions. The A*B*C term is not in the
model, so it will be pooled with the error term. The SD(Error) is estimated to be
20.

The results are displayed on the next page. The results show that a sample size of
3 per cell is required. Since there are 24 cells in a 2×3×4 factorial design, the total
sample size required is 72.

The report also includes the actual power of detecting the specified mean
difference. It's calculated using the probability function for the t distribution with
the critical t statistic for the test, the error degrees of freedom, and δ:

actual power = T(-2.65, 54, 3.75) + (1 - T(2.65, 54, 3.75)) = 0.859

Study Power of Multiple Comparisons
Solve for Sample Size

Given
Alpha 0.05
Number of Factors 3
Factor Levels 2 3 4
Interaction Terms Up to 2-way
Term of Interest C
SD(Error) 20
MC Method Tukey HSD
Detectable Difference 25
Power 0.80

Results
Sample Size (per cell) 3
Total Sample Size 72
Actual Power 0.859
Critical T Value 2.65
DF 54
NCP (δ) 3.750

The actual power of 0.859 is considerably higher than the target value of 0.80.
Let’s consider reducing sample size per cell from 3 to 2 (reducing total sample size
from 72 to 48). By solving for power for a sample size per cell of 2, we find that
the power is only 0.635, well below the target value.
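A sketch checking these numbers, assuming scipy; the error degrees of freedom
come from pooling the A*B*C term with error as described above:

import numpy as np
from scipy.stats import studentized_range, nct

a, b, c, n = 2, 3, 4, 3                       # factor levels, obs per cell
sd_err, diff, alpha = 20.0, 25.0, 0.05

total = a * b * c * n                         # 72 observations
# Main effects plus all two-factor interactions; A*B*C is pooled into error.
df_model = (a-1) + (b-1) + (c-1) + (a-1)*(b-1) + (a-1)*(c-1) + (b-1)*(c-1)
df_err = total - 1 - df_model                 # 54
per_c_mean = total // c                       # 18 observations per C level
se_diff = sd_err * np.sqrt(2 / per_c_mean)    # ~6.667
t_crit = studentized_range.ppf(1 - alpha, c, df_err) / np.sqrt(2)  # ~2.65
delta = diff / se_diff                        # 3.75
power = nct.sf(t_crit, df_err, delta) + nct.cdf(-t_crit, df_err, delta)  # ~0.859
print(df_err, round(t_crit, 2), round(power, 3))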

Factorial in Blocks
This procedure performs power analysis for the balanced factorial design in
randomized blocks. Increasing sample size is accomplished by adding additional
blocks. A blocks term is included in the model, and the variation between blocks is
removed from the error term. This is in contrast to the factorial design discussed



in the previous section, where there are no blocks and sample size is increased by
adding additional experimental units to cells. You can study the power of the F
test and the power of multiple comparisons of means.

Specification

Specifying the analysis proceeds the same as for the factorial design discussed at
length in the previous section. The difference is that sample size is expressed as
blocks rather than as observations per cell. The model for factorial in blocks
includes a term for blocks, which reduces the degrees of freedom for error.

Split-Plot Design
This procedure performs power analysis for the balanced split-plot design. You
can study the power of the F test and the power of multiple comparisons of means.

Specification First select Report, Table, or Plot by clicking on the corresponding tab located at
the top of the dialog box. The dialog box on the next page shows an example of a
Report dialog.



Next make a selection for Study Power of: F test or Multiple Comparisons. Make
a selection for Solve for: Power, Effect Size, or Sample Size. Enter a value for
Alpha: the statistical significance criterion used in the test.

A split-plot design has two factors, the main plot factor and the subplot factor.
Enter the number of levels for the main plot factor in the Main Levels box. Enter
the number of levels for the sub-plot factor in the Subplot Levels box.

There are three terms in the split-plot design that you may want to estimate power
for: Main, Subplot, and Main*Subplot. You can only analyze one term at a time.
Select a term from the Term of Interest drop-down list.

The split-plot design has two error terms and two components of error. Consider
the table of expected mean squares for a balanced split-plot design with r
replicates, a levels for main plot factor A, and b levels for subplot factor B.

Source           DF            Expected Mean Square
Replicate        r-1           σ² + bσ²_RA + abσ²_R
A                a-1           σ² + bσ²_RA + rbΣα_i²/(a-1)
Error(Rep*A)     (r-1)(a-1)    σ² + bσ²_RA
B                b-1           σ² + raΣβ_j²/(b-1)
A*B              (a-1)(b-1)    σ² + rΣΣ(αβ)_ij²/((a-1)(b-1))
Error(Rep*A*B)   a(b-1)(r-1)   σ²
Total            rab-1



The two components of error are denoted σ²_RA and σ² in the table above. Since
σ²_RA appears in the expected mean square for the main plot factor A, you'll need to
provide an estimate for it to evaluate power for the main plot term. It's also
required for multiple comparisons for the interaction term because it's a
component of the standard error for the difference of means at different levels of
the main plot factor. Enter an estimate for σ_RA in the SD(Rep*Main) box and an
estimate of σ in the SD(Error) box. If you have the results from a similar split-plot
design, you can compute an estimate for σ_RA using the mean squares from the
AOV table:

SD(Rep*Main) = [(MS_Rep*Main - MS_Error) / b]^½

where b is the number of subplot levels used in the prior experiment (not in the
proposed experiment). You can use the square root of MS_Error from an earlier
experiment as an estimate for SD(Error).

When solving for power or sample size, you’ll need to specify an effect size. For
the F test, the effect size is SD(Effect): the standard deviation of the effect of
interest. The effect of interest may be a main effect or an interaction. Press the
Estimate button for help arriving at a value. For multiple comparisons, the
effect size is Detectable Difference: the mean difference you want to be able to
detect using multiple comparisons. You’ll also select a MC Method: the multiple
comparison method you intend to use.

When solving for power or effect size, you’ll need to specify a value for sample
size. Enter a number of Replicates (that is, a number of main plots). When
solving for effect size or sample size, you’ll need to specify a value for Power.
Press OK to display the report.

Example The dialog box on the preceding page illustrates how to determine the power of the
F test for the main plot factor in a split-plot design with 4 main plot levels, 4
subplot levels, and 5 replicates. Estimates for SD(Rep*Main) and SD(Error) were
calculated using the mean squares from the alfalfa example split-plot on page ?.

SD(Rep*Main) = [(0.13623 - 0.02797) / 4]^½ = 0.329
SD(Error) = 0.02797^½ = 0.167

The value for effect size, SD(Main), was estimated using the range and mean
pattern method using a range of means of 0.3 and the minimum SD mean pattern.

The results are displayed on the next page. The resulting power estimate of 0.148
is very low. Power for the subplot main effect, using the same values, is estimated
at 0.998 (report not shown). This illustrates what we know about the split-plot
design: The power of detecting subplot effects is greater than the power of
detecting main plot effects.



Power Analysis for Split-Plot Design

Study Power of F Test
Solve for Power

Given
Alpha 0.05
Main Plot Levels 4
Subplot Levels 4
Term of Interest Main
SD(Rep*Main) 0.329
SD(Error) 0.167
Range 0.3
Pattern Minimum SD
SD(Main) 0.1225
Replicates 5
Total Sample Size 80

Results
Power 0.148
Critical F Value 3.49
DF 3,12
NCP (λ) 1.954

As mentioned earlier, studying the power of multiple comparisons for the
interaction term in a split-plot design requires two components of error, σ²_RA and
σ². Statistix pools these errors using Satterthwaite's approximation, and therefore
the results are approximate for means with different levels of the main plot factor.
See the strip-plot design discussed in the next section for an example report of
multiple comparisons for an interaction using pooled error.

Strip-Plot Design
This procedure performs power analysis for the balanced strip-plot design. You
can study the power of the F test and the power of multiple comparisons of means.

Specification First select Report, Table, or Plot by clicking on the corresponding tab located at
the top of the dialog box. The dialog box on the next page shows an example of a
Report dialog.

Next make a selection for Study Power of: F test or Multiple Comparisons. Make
a selection for Solve for: Power, Effect Size, or Sample Size. Enter a value for
Alpha: the statistical significance criterion used in the test.

A strip-plot design has two factors, the horizontal and vertical factors. Enter the
number of levels for the horizontal factor in the Horizontal Levels box. Enter the
number of levels for the vertical factor in the Vertical Levels box.



There are three terms in the strip-plot design that you may want to estimate power
for: Horizontal, Vertical, and Horz*Vert. You can only analyze one term at a
time. Select a term from the Term of Interest drop-down list.

The strip-plot design has three error terms and three components of error.
Consider the table of expected mean squares for a strip-plot design with r
replicates, a levels for horizontal factor A, and b levels for vertical factor B.

Source           DF                 Expected Mean Square
Replicate        r-1                σ² + bσ²_RA + aσ²_RB + abσ²_R
A                a-1                σ² + bσ²_RA + rbΣα_i²/(a-1)
Error(Rep*A)     (r-1)(a-1)         σ² + bσ²_RA
B                b-1                σ² + aσ²_RB + raΣβ_j²/(b-1)
Error(Rep*B)     (r-1)(b-1)         σ² + aσ²_RB
A*B              (a-1)(b-1)         σ² + rΣΣ(αβ)_ij²/((a-1)(b-1))
Error(Rep*A*B)   (a-1)(b-1)(r-1)    σ²
Total            rab-1

The three components of error are denoted σ²_RA, σ²_RB, and σ² in the table above.
Power analysis for term A requires an estimate of σ²_RA. Analysis of term B
requires an estimate of σ²_RB. Analysis of multiple comparisons for A*B requires
estimates of both. Enter an estimate for σ_RA in the SD(Rep*Horz) box. Enter an
estimate for σ_RB in the SD(Rep*Vert) box. Enter an estimate of σ in the
SD(Error) box. If you have the results from a similar strip-plot design, you can
compute estimates using the mean squares from the AOV table:

SD(Rep*Horz) = [(MS_Rep*Horz - MS_Error) / b]^½
SD(Rep*Vert) = [(MS_Rep*Vert - MS_Error) / a]^½
SD(Error) = [MS_Error]^½

where a is the number of horizontal levels and b the number of vertical levels used
in the prior experiment.

When solving for power or sample size, you’ll need to specify an effect size. For
the F test, the effect size is SD(Effect): the standard deviation of the effect of
interest. The effect of interest may be a main effect or an interaction. Press the
Estimate button for help arriving at a value. For multiple comparisons, the
effect size is Detectable Difference: the mean difference you want to be able to
detect using multiple comparisons. You’ll also select a MC Method: the multiple
comparison method you intend to use.

When solving for power or effect size, you’ll need to specify a value for Sample
Size. Enter a number of replicates (that is, a number of main plots). When solving
for effect size or sample size, you’ll need to specify a value for Power. Press OK
to display the report.

Example The dialog box on the preceding page is used to compute the power of detecting a
difference of 2000 using Tukey's all-pairwise mean comparisons in a strip-plot
design with 6 horizontal plot levels, 3 vertical plot levels, and 5 replicates.
Estimates for SD(Rep*Horz), SD(Rep*Vert), and SD(Error) were calculated using
the mean squares from the rice yield example strip-plot on page 235.

SD(Rep*Horz) = [(1492262 - 411646) / 3]^½ = 600
SD(Rep*Vert) = [(743727 - 411646) / 6]^½ = 235
SD(Error) = [411646]^½ = 642

The results are displayed on the next page.

The results are divided into three parts. Tests comparing two means that have
the same level of the horizontal factor use a pooled error combining the error
components SD(Rep*Vert) and SD(Error). Tests comparing two means that
have the same level of the vertical factor use a pooled error combining the error
components SD(Rep*Horz) and SD(Error). Comparisons of two means that have
different levels of both the horizontal and vertical factors use a pooled error
combining the error components SD(Rep*Horz), SD(Rep*Vert), and SD(Error).

So the three types of comparisons have different standard errors and different
degrees of freedom. The pooled errors and degrees of freedom are estimated using
Satterthwaite’s approximation (Welch, 1947).

As a result of having different standard errors and degrees of freedom, the power
for the three groups of mean comparisons are different. The power of detecting a



difference of 2000 for two means with the same level of the horizontal factor is
0.812. The power for two means with the same level of the vertical factor is 0.467.
The power for two means with different levels of both factors is 0.427.

Power Analysis for Strip-Plot Design

Study Power of Multiple Comparisons
Solve for Power

Given
Alpha 0.05
Horizontal Levels 6
Vertical Levels 3
Term of Interest Horz*Vert
SD(Rep*Horizontal) 600
SD(Rep*Vertical) 235
SD(Error) 642
MC Method Tukey HSD
Detectable Difference 2000
Replicates 5
Total Sample Size 90

Results
Same Level of Horizontal Factor (results are approximate)
Power 0.812
SE(diff) 432.4
Critical T Value 3.70
DF 44.9
NCP (δ) 4.626

Same Level of Vertical Factor (results are approximate)
Power 0.467
SE(diff) 555.8
Critical T Value 3.71
DF 41.8
NCP (δ) 3.599

Different Levels of Horizontal and Vertical Factors (results are approximate)
Power 0.427
SE(diff) 575.3
Critical T Value 3.70
DF 44.6
NCP (δ) 3.477
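A sketch of the Satterthwaite pooling behind this report, assuming scipy. The
mean squares are rebuilt from the three SD components, and the pooling
coefficients follow the expected mean squares table earlier in this section:

import numpy as np
from scipy.stats import studentized_range, nct

a, b, r = 6, 3, 5                              # horizontal, vertical, replicates
sd_ra, sd_rb, sd_e = 600.0, 235.0, 642.0       # SD(Rep*Horz), SD(Rep*Vert), SD(Error)
diff, alpha = 2000.0, 0.05

ms_e = sd_e**2                                 # MS(Error),  df = (a-1)(b-1)(r-1)
ms_ra = ms_e + b * sd_ra**2                    # MS(Rep*A),  df = (r-1)(a-1)
ms_rb = ms_e + a * sd_rb**2                    # MS(Rep*B),  df = (r-1)(b-1)
df_e, df_ra, df_rb = (a-1)*(b-1)*(r-1), (r-1)*(a-1), (r-1)*(b-1)

def pool(terms):
    # Satterthwaite pooled variance and df for a sum of c * MS terms.
    v = sum(c * ms for c, ms, _ in terms)
    return v, v**2 / sum((c * ms)**2 / d for c, ms, d in terms)

cases = {
    "same horizontal": [(1/a, ms_rb, df_rb), ((a-1)/a, ms_e, df_e)],
    "same vertical":   [(1/b, ms_ra, df_ra), ((b-1)/b, ms_e, df_e)],
    "different both":  [(1/b, ms_ra, df_ra), (1/a, ms_rb, df_rb),
                        (1 - 1/a - 1/b, ms_e, df_e)],
}
for name, terms in cases.items():
    v, df = pool(terms)
    se = np.sqrt(2 * v / r)                    # SE of a difference of cell means
    t_crit = studentized_range.ppf(1 - alpha, a * b, df) / np.sqrt(2)
    delta = diff / se
    power = nct.sf(t_crit, df, delta) + nct.cdf(-t_crit, df, delta)
    print(f"{name}: SE(diff)={se:.1f}, df={df:.1f}, power={power:.3f}")

The printed SE(diff) and df values match the three parts of the report (432.4 with
44.9, 555.8 with 41.8, and 575.3 with 44.6).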

Split-Split-Plot Design
This procedure performs power analysis for the balanced split-split-plot design.
You can study the power of the F test and the power of multiple comparisons of
means.

Specification First select Report, Table, or Plot by clicking on the corresponding tab located at
the top of the dialog box. The dialog box on the next page shows an example of a
Report dialog.



Next make a selection for Study Power of: F test or Multiple Comparisons. Make
a selection for Solve for: Power, Effect Size, or Sample Size. Enter a value for
Alpha: the statistical significance criterion used in the test.

A split-split-plot design has three factors, the main plot factor, the subplot factor,
and the sub-subplot factor. Enter the number of levels for the main plot factor in
the Main Levels box. Enter the number of levels for the subplot factor in the
Subplot Levels box. Enter the number of levels for the sub-subplot factor in the
Sub-subplot Levels box.

There are seven terms in the split-split-plot design that you may want to estimate
power for: Main, Subplot, Sub-subplot, Main*Subplot, Main*Sub-subplot,
Subplot*Sub-subplot, and Main*Subplot*Sub-subplot. You can only analyze one
term at a time. Select a term from the Term of Interest drop-down list.

The split-split-plot design has three error terms and three components of error.
Consider the table of expected mean squares for a balanced split-split-plot design
with r replicates, a levels for main plot factor A, b levels for subplot factor B, and
c levels for sub-subplot factor C.



Source             DF                Expected Mean Square
Replicate          r-1               σ² + cσ²_RAB + bcσ²_RA + abcσ²_R
A                  a-1               σ² + cσ²_RAB + bcσ²_RA + rbcΣα_i²/(a-1)
Error(Rep*A)       (r-1)(a-1)        σ² + cσ²_RAB + bcσ²_RA
B                  b-1               σ² + cσ²_RAB + racΣβ_j²/(b-1)
A*B                (a-1)(b-1)        σ² + cσ²_RAB + rcΣΣ(αβ)_ij²/((a-1)(b-1))
Error(Rep*A*B)     a(r-1)(b-1)       σ² + cσ²_RAB
C                  c-1               σ² + rabΣγ_k²/(c-1)
A*C                (a-1)(c-1)        σ² + rbΣΣ(αγ)_ik²/((a-1)(c-1))
B*C                (b-1)(c-1)        σ² + raΣΣ(βγ)_jk²/((b-1)(c-1))
A*B*C              (a-1)(b-1)(c-1)   σ² + rΣΣΣ(αβγ)_ijk²/((a-1)(b-1)(c-1))
Error(Rep*A*B*C)   ab(r-1)(c-1)      σ²
Total              rabc-1

The three components of error are denoted σ²_RA, σ²_RAB, and σ² in the table above.
Power analysis for term A requires estimates of all three components of error: σ²_RA,
σ²_RAB, and σ². Analysis of terms B and A*B requires estimates of σ²_RAB and σ².
Analysis of the remaining terms requires only an estimate of σ². Analysis of
multiple comparisons for B*C requires estimates of σ²_RAB and σ². Analysis of
multiple comparisons for the remaining interaction terms requires estimates of all
three error components. Enter an estimate for σ_RA in the SD(Rep*Main) box.
Enter an estimate for σ_RAB in the SD(Rep*Main*Subplot) box. Enter an estimate
of σ in the SD(Error) box. Once you select a term of interest, only the required
boxes are enabled, so it will be clear which error components are needed.

If you have the results from a similar split-split-plot design, you can compute
estimates using the mean squares from the AOV table:

SD(Rep*Main) = [(MS_Rep*Main - MS_Rep*Main*Subplot) / bc]^½
SD(Rep*Main*Subplot) = [(MS_Rep*Main*Subplot - MS_Error) / c]^½
SD(Error) = [MS_Error]^½

where b is the number of subplot levels and c the number of sub-subplot levels
used in the prior experiment.

When solving for power or sample size, you’ll need to specify an effect size. For
the F test, the effect size is SD(Effect): the standard deviation of the effect of
interest. The effect of interest may be a main effect or an interaction. Press the
Estimate button for help arriving at a value. For multiple comparisons, the
effect size is Detectable Difference: the mean difference you want to be able to
detect using multiple comparisons. You’ll also select a MC Method: the multiple
comparison method you intend to use.

When solving for power or effect size, you’ll need to specify a value for sample
size. Enter a number of replicates (that is, a number of main plots). When solving
for effect size or sample size, you’ll need to specify a value for power. Press OK
to display the report.



Study Power of Multiple Comparisons
Solve for Sample Size

Given
Alpha 0.05
Main Plot Levels 4
Subplot Levels 3
Sub-Subplot Levels 2
Term of Interest Sub-Subplot
SD(Error) 1.0
MC Method Tukey HSD
Detectable Difference 0.8
Power 0.80

Results
Replicates 3
Total Sample Size 72
Actual Power 0.903
SE(diff) 0.236
Critical T Value 2.06
DF 24
NCP (δ) 3.394

Repeated Measures, One-Way


This procedure performs power analysis for the balanced one-way repeated
measures design. You can study the power of the F test and the power of multiple
comparisons of means.

Specification



First select Report, Table, or Plot by clicking on the corresponding tab located at
the top of the dialog box. The dialog box on the preceding page shows an example
of a Report dialog.

Next make a selection for Study Power of: F test or Multiple Comparisons. Make
a selection for Solve for: Power, Effect Size, or Sample Size. Enter a value for
Alpha: the statistical significance criterion used in the test.

A one-way repeated measures design has one within-subjects factor. Enter the
number of levels for the within-subjects factor in the Within-Subjects Levels box.

Enter a value for SD(Error): the standard deviation due to random error among
observations within the same treatment for a single subject. If you have the results
from a similar repeated measures AOV, you can use the square root of the error
mean square as an estimate.

The F test in a repeated measures design can be corrected for nonsphericity.
Sphericity is an important assumption of a repeated measures AOV. It refers to the
condition where the variances of the differences between all possible pairs of the
within-subject factor are equal. The F test can be corrected by reducing the
degrees of freedom based on a statistic called epsilon (ε), which is a measure of the
departure from sphericity. A value of 1 for epsilon indicates sphericity. The
smaller epsilon, the greater the departure from sphericity. Enter a value for epsilon
in the Nonsphericity (ε) box. Estimates for epsilon are included in Statistix reports
for repeated measures designs.

When solving for power or sample size, you’ll need to specify an effect size. For
the F test, the effect size is SD(Effect): the standard deviation of the within-
subjects main effect. Press the Estimate button for help arriving at a value. For
multiple comparisons, the effect size is Detectable Difference: the mean difference
you want to be able to detect using multiple comparisons. You’ll also select a MC
Method: the multiple comparison method you intend to use.

When solving for power or effect size, you’ll need to specify a value for sample
size. Enter a number of subjects in the Subjects box. When solving for effect size
or sample size, you’ll need to specify a value for Power. Press OK to display the
report.

Example The dialog box on the preceding page is used to find the sample size required to
obtain power of 0.80 for the F test in a repeated measures design with 3 levels of
the within-subject factor, SD(Error) of 9.6, and ε of 0.7785. The effect size
SD(Within) value of 10.0 was calculated in the Estimate dialog box using the
Range and Mean Pattern method. A value of 20 was entered for the range and the
minimum SD pattern selected.



Study Power of F Test
Solve for Sample Size

Given
Alpha 0.05
Within-Subject Levels 3
SD(Error) 9.6
Nonsphericity (ε) 0.7785
Range 20
Pattern Minimum SD
SD(Within) 10
Power 0.80

Results
Subjects 8
Total Observations 24
Actual Power 0.862
Critical F Value 4.28
DF 1.6,10.9
NCP (λ) 13.52

Results are approximate for ε < 1.0.

The report shows that 8 subjects are required to obtain power of 0.80 of detecting
an effect when the true SD(Within) is 10. Note that the degrees of freedom aren’t
whole numbers. They’ve been corrected by multiplying the degrees of freedom of
the uncorrected F test (2 and 14) by ε (0.7785).
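A sketch of how the corrected values in this report might be reproduced, assuming
scipy and assuming that both the degrees of freedom and the noncentrality
parameter are multiplied by ε, which is consistent with the numbers shown:

from scipy.stats import f, ncf

b, s = 3, 8                        # within-subject levels, subjects
sd_w, sd_e, eps, alpha = 10.0, 9.6, 0.7785, 0.05

df1 = (b - 1) * eps                # corrected numerator df, ~1.6
df2 = (b - 1) * (s - 1) * eps      # corrected denominator df, ~10.9
lam = eps * (b - 1) * s * sd_w**2 / sd_e**2   # corrected noncentrality, ~13.52
f_crit = f.ppf(1 - alpha, df1, df2)           # ~4.28
power = ncf.sf(f_crit, df1, df2, lam)         # ~0.862
print(round(f_crit, 2), round(lam, 2), round(power, 3))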

Repeated Measures, Two-Way


This procedure performs power analysis for the balanced two-way repeated
measures design with one between-subjects factor and one within-subjects factor.
You can study the power of the F test and the power of multiple comparisons of
means.

Specification First select Report, Table, or Plot by clicking on the corresponding tab located at
the top of the dialog box. The dialog box on the next page shows an example of a
Report dialog.

Next make a selection for Study Power of: F test or Multiple Comparisons. Make
a selection for Solve for: Power, Effect Size, or Sample Size. Enter a value for
Alpha: the statistical significance criterion used in the test.

This two-way repeated measures design has one between-subjects factor and one
within-subjects factor. Enter the number of levels for the between-subjects factor
in the Between-Subjects Levels box. Enter the number of levels for the within-
subjects factor in the Within-Subjects Levels box.



There are three terms in the two-way repeated measures design that you may want
to estimate power for: Between, Within, and Between×Within. You can only
analyze one term at a time. Select a term from the Term of Interest drop-down
list.

The two-factor repeated measures design has two error terms and two components
of error. Consider the table of expected mean squares for a balanced repeated
measures design with s subjects, a levels for between-subjects factor A, and b
levels for within-subjects factor B.

Source                DF             Expected Mean Square

A                     a-1            σ² + bσ²AS + sbΣαi²/(a-1)
Error(A×Subjects)     a(s-1)         σ² + bσ²AS
B                     b-1            σ² + asΣβj²/(b-1)
A×B                   (a-1)(b-1)     σ² + sΣΣ(αβ)ij²/[(a-1)(b-1)]
Error(A×Subjects×B)   a(s-1)(b-1)    σ²
Total                 asb-1

The two components of error are denoted σ²AS and σ² in the table above. Since σ²AS
appears in the expected mean square for the between-subjects factor A, you’ll need
to provide an estimate for it to evaluate power for the between-subjects main
effect. It’s also required for multiple comparisons for the interaction term because
it’s a component of the standard error for the difference of means at different levels
of the between-subjects factor. Enter an estimate for σAS in the
SD(Between*Subject) box and an estimate of σ in the SD(Error) box. If you have
the results from a repeated measures design, you can compute an estimate for σAS
using the mean squares from the AOV table:
SD(Between*Subject) = [(MS(Between*Subject) - MS(Error)) / b]^½
where b is the number of within-subjects levels used in the prior experiment (not in
the proposed experiment). You can use the square root of MS(Error) from an earlier
experiment as an estimate for SD(Error).
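
As an illustration, suppose a prior experiment with b = 6 within-subjects levels
produced the hypothetical mean squares below. This sketch (assumed values, not
taken from a Statistix report) applies the formula just given.

    import math

    ms_between_subject = 25.0  # MS for Error(A x Subjects), assumed value
    ms_error = 1.0             # MS for Error(A x Subjects x B), assumed value
    b = 6                      # within-subjects levels in the prior experiment

    sd_between_subject = math.sqrt((ms_between_subject - ms_error) / b)  # 2.0
    sd_error = math.sqrt(ms_error)                                       # 1.0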

The F tests for the Within and Between×Within terms in a repeated measures
design can be corrected for nonsphericity. Sphericity is an important assumption
of a repeated measures AOV. It refers to the condition where the variances of the
differences between all possible pairs of the within-subject factor are equal. The F
test can be corrected by reducing the degrees of freedom based on a statistic called
epsilon (ε), which is a measure of the departure from sphericity. A value of 1 for
epsilon indicates sphericity. The smaller epsilon, the greater the departure from
sphericity. Enter a value for epsilon in the Nonsphericity (ε) box. Estimates for
epsilon are included in Statistix reports for repeated measures designs.

When solving for power or sample size, you’ll need to specify an effect size. For
the F test, the effect size is SD(Effect): the standard deviation of the term of
interest. Press the Estimate button for help in arriving at a value. For
multiple comparisons, the effect size is Detectable Difference: the mean difference
you want to be able to detect using multiple comparisons. You’ll also select an MC
Method: the multiple comparison method you intend to use.

When solving for power or effect size, you’ll need to specify a value for sample
size. Enter a number of subjects in the Subjects box. When solving for effect size
or sample size, you’ll need to specify a value for Power. Press OK to display the
report.

Example The dialog box on the preceding page is used to compute the power of detecting a
difference of 3 for the between-subjects main effect using Dunnett’s mean
comparisons with a control in a repeated measures design with 3 between-subjects
factor levels, 6 within-subjects factor levels, and 10 subjects. The results are
displayed on the next page.

The resulting power is 0.824. The standard error of the difference of two means,
based on the estimates of the components of error provided, is 0.913. The critical t
value for Dunnett’s test is 2.33 with 27 degrees of freedom. The noncentrality
parameter δ associated with a detectable difference of 3.0 is 3.286.



Power Analysis for Repeated Measures Two-Factor Design

Study Power of Multiple Comparisons
Solve for Power

Given
Alpha 0.05
Between-Subjects Levels 3
Within-Subjects Levels 6
Term of Interest Between
SD(Between*Subject) 2.0
SD(Error) 1.0
MC Method Dunnett
Detectable Difference 3.0
Subjects/Between 10
Total Subjects 30
Total Observations 180

Results
Power 0.824
SE(diff) 0.913
Critical T Value 2.33
DF 27
NCP (δ) 3.286
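
The key quantities in this report can be checked with a short sketch. The Dunnett
critical value is taken from the report itself (computing it requires the
multivariate t distribution); given that value, a per-comparison power follows
from the noncentral t distribution, so the figure below is an approximation, not
Statistix’s exact method.

    import math
    from scipy.stats import nct

    sd_bs, sd_err = 2.0, 1.0   # SD(Between*Subject), SD(Error)
    a, b, s = 3, 6, 10         # between levels, within levels, subjects per group
    diff = 3.0

    # Variance of a between-subjects group mean uses both error components:
    # Var(mean) = (sigma^2 + b*sigma_AS^2) / (b*s)
    se_diff = math.sqrt(2 * (sd_err**2 + b * sd_bs**2) / (b * s))  # 0.913
    delta = diff / se_diff                                         # 3.286
    df = a * (s - 1)                                               # 27

    t_crit = 2.33                           # Dunnett value from the report
    power = 1 - nct.cdf(t_crit, df, delta)  # about 0.82
    print(se_diff, delta, power)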

One Proportion
This procedure performs power analysis for the one proportion test. The one
proportion test is used to test the hypothesis that a proportion is equal to a
particular value.

Specification



First select Report, Table, or Plot by clicking on the corresponding tab located at
the top of the dialog box. The dialog above shows an example of a Report dialog.
The Row Variable and Column Variable options are disabled because they’re used
for the Table dialog, not the Report dialog. Next make a selection for Solve for:
Power or Sample Size. The dialog above shows an example of solving for sample
size, so the box used to enter a value for sample size is disabled because it won’t
be used.

There are two methods available for performing power analysis: the exact method
based on the binomial distribution, or the normal approximation method based on
large sample theory. Make your selection from the Method drop-down list.

Enter a value for Alpha: the statistical significance criterion used in the test.

Enter a value for Null Hypothesis (p0): the population proportion under the null
hypothesis. This is the proportion that exists in the general population that the
proportion estimated from the data will be tested against.

Enter a value for the Actual Value (p): the actual proportion of the population from
which the sample will be drawn. It is the effect size, that is, the proportion that, if
true, you’d like to be able to detect as different from the null hypothesis.

Next, select the Alternative Hypothesis you intend to use when analyzing the data
for the experiment: the two-sided alternative p ≠ p0, the one-sided alternative p <
p0, or the one-sided alternative p > p0.

When solving for power, enter a value for Sample Size. When solving for sample
size, enter a value for Power. Press OK to display the report.

Example Suppose we’re interested in testing the hypothesis that women with a sister history
of breast cancer are at a higher risk of breast cancer than the general population
(Rosner, 2000). Let’s assume that the prevalence rate of breast cancer is 2%
among women aged 50-54, and the rate is 5% among women aged 50-54 with a
sister history of breast cancer. The dialog box on the preceding page is used to
determine the sample size of women with a sister history required to achieve
power of 0.90 of detecting a difference using a two-sided test with alpha = 0.05.
The results are displayed on the next page.

The report gives two answers for sample size: 377 and 400. The power function
for the exact test of one proportion has a well-described saw-toothed
shape rather than the monotonically increasing function that we’d expect
intuitively (Chernick and Liu, 2002). For some values of sample sizes, increasing
sample size by 1 decreases power rather than increasing it. This occurs because
the target value for the significance level can’t be obtained exactly for most sample
sizes.



Power Analysis for One Proportion

Solve for Sample Size

Given
Method Exact
Alpha 0.05
Null Hypothesis (p0) 0.02
Actual Value (p) 0.05
Alternative Hyp p ≠ p0
Power 0.90

Results
Sample Size (1) 377
Actual Power 0.902
Sample Size (2) 400
Actual Power 0.901

Sample Size (1) in the report is the lowest sample size that achieves the desired
power. Sample Size (2) is the lowest sample size that achieves the desired power,
and also satisfies the condition that all larger sample sizes achieve the desired
power. We suggest that you use the higher number labeled sample size (2).

When computing a table or plot of sample sizes for a range of values of a
parameter other than sample size, only sample size (2) is reported. For example,
consider the table below of sample sizes determined for a range of effect sizes.

Power Analysis for One Proportion

Solve for Sample Size

Given
Method Exact
Alpha 0.05
Null Hypothesis (p0) 0.02
Alternative Hyp p ≠ p0
Power 0.90

Actual Value   Sample
(p)            Size
0.04 814
0.05 400
0.06 255
0.07 184
0.08 145
0.09 115

Note that for p = 0.05, the sample size reported is 400, the same value given for
sample size (2) in the report on the preceding page.

Computational See Rosner (2000) for a description of the normal approximation. Values for the
Notes binomial distribution are computed using DiDinato and Morris (1992).



Two Proportions
This procedure performs power analysis for the test of two proportions. The two
proportions test is used to compare proportions in two independent samples. The
null hypothesis tested is that the two proportions are the same.

Specification First select Report, Table, or Plot by clicking on the corresponding tab located at
the top of the dialog box. The dialog on the next page shows an example of a
Report dialog. The Row Variable and Column Variable options are disabled
because they’re used for the Table dialog, not the Report dialog.

Next make a selection for Solve for: Power or Sample Size. The dialog above
shows an example of solving for sample size, so the box used to enter a value for
sample size is disabled because it won’t be used.

There are two methods available for performing power analysis: the exact method
based on Fisher’s exact test (Casagrande et al., 1978a), or the normal
approximation with continuity correction (Casagrande, et al., 1978b). Make your
selection from the Method drop-down list.

Enter a value for Alpha: the statistical significance criterion used in the test.

Enter values for the proportions for the two populations from which samples will
be drawn in the boxes for p1 and p2. Power will be computed assuming that the
values entered are the true population proportions.
Next, select the Alternative Hypothesis you intend to use when analyzing the data
for the experiment: the two-sided alternative p1 ≠ p2, the one-sided alternative p1 <
p2, or the one-sided alternative p1 > p2.

When solving for power, enter values for sample sizes for the two samples in the
boxes Sample Size 1 and Sample Size 2. When solving for sample size, enter a
value for Power. Press OK to display the report.

Example Suppose we’re interested in comparing two surgical techniques where the success
rates are assumed to be 50% and 70% respectively. The dialog box on the
preceding page is used to determine the sample sizes required to reach a power of
0.80 of detecting a difference in success rates using a two-sided test with alpha =
0.05. The results are displayed below.

Power Analysis for Two Proportions

Solve for Sample Size

Given
Method Exact
Alpha 0.05
p1 0.50
p2 0.70
Alternative Hyp p1 ≠ p2
Power 0.80

Results
Sample Size 1 102
Sample Size 2 102
Actual Power 0.806

The report shows that 102 subjects in each group are required for a total sample
size of 204. The actual power achieved with this sample size is 0.806.
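
The continuity-corrected normal approximation (the second Method choice) can be
sketched as follows, using a standard statement of the Casagrande et al. (1978b)
formula; here it yields 103 per group, one more than the exact method above.

    import math
    from scipy.stats import norm

    p1, p2, alpha, power = 0.50, 0.70, 0.05, 0.80
    d = abs(p2 - p1)
    pbar = (p1 + p2) / 2
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)

    # Uncorrected normal-approximation sample size per group
    n0 = (za * math.sqrt(2 * pbar * (1 - pbar))
          + zb * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2)))**2 / d**2

    # Continuity correction
    n = n0 / 4 * (1 + math.sqrt(1 + 4 / (n0 * d)))**2
    print(math.ceil(n))   # 103 per group; the exact method above gives 102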

One Correlation
This procedure performs power analysis for the one correlation test. The one
correlation test is used to test the hypothesis that a correlation is equal to a
particular value.

Specification First select Report, Table, or Plot by clicking on the corresponding tab located at
the top of the dialog box. Next make a selection for Solve for: Power or Sample
Size. Enter a value for Alpha: the statistical significance criterion used in the test.

Enter a value for Null Hypothesis (ρ0): the population correlation under the null
hypothesis. This is the correlation that exists in the general population that the
correlation estimated from the data will be tested against.



Enter a value for the Actual Value (ρ): the actual correlation of the population from
which the sample will be drawn. It is the effect size, that is, the correlation that, if
true, you’d like to be able to detect as different from the null hypothesis.

Next, select the Alternative Hypothesis you intend to use when analyzing the data
for the experiment: the two-sided alternative ρ ≠ ρ0, the one-sided alternative ρ <
ρ0, or the one-sided alternative ρ > ρ0. When solving for power, enter a value for
Sample Size. When solving for sample size, enter a value for Power. Press OK to
display the report.

Example The dialog box above is used to determine the sample size required to achieve
power of 0.80 of rejecting the null hypothesis that the correlation coefficient equals
0.0 when the actual population coefficient is 0.2 using a two-sided test with
alpha = 0.05. The results are displayed below.

Power Analysis for One Correlation

Solve for Sample Size

Given
Alpha 0.05
Null Hypothesis (ρ0) 0.00
Actual Value (ρ) 0.20
Alternative Hyp ρ ≠ ρ0
Power 0.80

Results
Sample Size 193
Actual Power 0.800



Computational The method used to compute power using the correlation coefficient distribution is
Notes described in Zar (1999). Exact probabilities for the correlation distribution are
computed using Guenther (1977) for small samples (N ≤ 1000) and approximate
probabilities using Kraemer (1973) for large samples. Results are exact when ρ0
equals zero for any sample size.
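
For a quick cross-check, the familiar Fisher z-transformation approximation gives
nearly the same answer as the report above (a sketch; since ρ0 = 0 here, the
report’s result is exact and the approximation differs by one case):

    import math
    from scipy.stats import norm

    rho0, rho, alpha, power = 0.0, 0.2, 0.05, 0.80
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    dz = math.atanh(rho) - math.atanh(rho0)
    n = ((za + zb) / dz)**2 + 3
    print(math.ceil(n))   # 194, vs. 193 from the exact method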

Two Correlations
This procedure performs power analysis for the test of two correlation coefficients.
The two correlations test is used to compare correlations in two independent
samples. The null hypothesis tested is that the two correlations are equal. The test
is equivalent to testing that two simple regression slopes are the same.

Specification

First select Report, Table, or Plot by clicking on the corresponding tab located at
the top of the dialog box. Next make a selection for Solve for: Power or Sample
Size. Enter a value for Alpha: the statistical significance criterion used in the test.

Enter hypothesized values for the two population correlations, ρ1 and ρ2. Select
the Alternative Hypothesis you intend to use when analyzing the data for the
experiment: the two-sided alternative ρ1 ≠ ρ2, the one-sided alternative ρ1 < ρ2, or
the one-sided alternative ρ1 > ρ2.

When solving for power, enter values for Sample Size 1 and Sample Size 2. When
solving for sample size, enter a value for Power. Press OK to display the report.



Example The dialog box on the preceding page is used to determine the power of detecting a
difference between the correlations of two samples drawn from populations where
the actual correlations are 0.2 and 0.4 with sample sizes of 200 and 100
respectively using a two-sided test with alpha = 0.05. The results are displayed
below.

Power Analysis for Two Correlations

Solve for Power

Given
Alpha 0.05
ρ1 0.20
ρ2 0.40
Alternative Hyp ρ1 ≠ ρ2
Sample Size 1 200
Sample Size 2 100

Results
Power 0.429

The resulting power is 0.429.

Computational Power and sample sizes are computed using the normal approximation (Zar, 1999).
Notes
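
A sketch of the normal approximation (Zar, 1999), reproducing the report above:

    import math
    from scipy.stats import norm

    r1, r2, n1, n2, alpha = 0.2, 0.4, 200, 100, 0.05
    z_diff = math.atanh(r2) - math.atanh(r1)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    za = norm.ppf(1 - alpha / 2)
    power = norm.cdf(z_diff / se - za) + norm.cdf(-z_diff / se - za)
    print(round(power, 3))   # 0.429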

Linear Regression (β)


This procedure performs power analysis for simple or multiple linear regression. It
determines the power of the t test used to test whether or not a regression
coefficient (β) is different from zero. This version of power analysis for linear
regression has the advantage of expressing the effect size in the units of the
dependent and independent variables, making it easy to establish the magnitude of
effect sizes that are clinically important.

The next section, Linear Regression (R²), uses R² for effect size. R² is unitless, so
it’s an abstract measure of effect size. But it has the advantage of letting you
analyze the power of groups of independent variables in a regression model.

Specification First select Report, Table, or Plot by clicking on the corresponding tab located at
the top of the dialog box. Next make a selection for Solve for: Power, Sample
Size, or Effect Size. Make a selection for the Alternative Hypothesis: either one-
sided test or two-sided test. The two-sided test is commonly used in regression.
Enter a value for Alpha: the statistical significance criterion used in the test.



Enter a value for the Number of Predictors, that is, the number of independent
variables in the model. You can only analyze one predictor at a time, which is
referred to here as Xj. Enter the standard deviation for the predictor of interest in
the SD(Xj) box. This is the ordinary SD of a variable like those found in the
Descriptive Statistics report, not the standard error of a regression coefficient.

When analyzing power for multiple regression, enter a value for VIF(Xj), the
variance inflation factor for the predictor Xj. VIF is a measure of collinearity
between Xj and the remaining independent variables in the model. A value of 1.0
indicates no collinearity resulting in no inflation of the variance of the regression
coefficient. The larger a VIF, the greater the collinearity, and the greater the
inflation of the variance, which has a negative effect on power. You can find
values for VIF in the coefficient table of linear regression.

Enter a value in the SD(Error) box: The square root of the mean square error for
the regression model, which is part of the coefficient table report for Linear
Regression. A pilot study will likely be needed to obtain an estimate.

When solving for power or sample size, enter the effect size that you’d like to be
able to detect in the β(Xj) box. When solving for power or effect size, enter a
value for Sample Size, the number of cases. When solving for sample size or
effect size, enter a value for Power. Press OK to display the report.

Example The dialog box above is used to determine the sample size required to achieve
power of 0.80 of detecting a regression coefficient significantly different from zero
when the actual coefficient is 1.0, given a model with two predictors, the SD for
the predictor of interest is 6, the VIF for the predictor of interest is 1.5, and the SD
for error equals 12. The results are displayed below.

Power Analysis for Linear Regression (β)

Solve for Sample Size

Given
Two-tailed Test
Alpha 0.05
Number of Predictors 2
SD(Xj) 6.0
VIF(Xj) 1.5
SD(Error) 12
β(Xj) 1.0
Power 0.80

Results
Sample Size 50
Actual Power 0.807
Critical T Value 2.01
DF 47
NCP (δ) 2.887

A sample size of 50 cases is needed to obtain the desired power of 0.80. The
resulting actual power is 0.807.

Computational The noncentrality parameter δ for the t statistic used to test a regression coefficient
Notes is δ = β / SE(β) (Seber, 1977). In simple regression, SE(β) = σ/(√n SD(X)). In
multiple regression, SE(βj) = σ√VIFj/(√n SD(Xj)). Power is calculated using the
probability function for the t distribution using the critical t value and degrees of
freedom that would be used for the t test, and δ: Power = T(-tcritical, df, δ) +
(1 - T(tcritical, df, δ)).
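
A sketch of this computation in Python with SciPy, reproducing the example
report above:

    import math
    from scipy.stats import t, nct

    n, k = 50, 2                           # cases, predictors
    beta, sd_x, vif, sigma = 1.0, 6.0, 1.5, 12.0
    alpha = 0.05

    df = n - k - 1                                                 # 47
    delta = beta * math.sqrt(n) * sd_x / (sigma * math.sqrt(vif))  # 2.887
    t_crit = t.ppf(1 - alpha / 2, df)                              # about 2.01
    power = nct.cdf(-t_crit, df, delta) + (1 - nct.cdf(t_crit, df, delta))
    print(round(power, 3))   # about 0.807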

Linear Regression (R²)


This procedure performs power analysis for a hierarchical linear regression
analysis. It determines the power of the F test used to test whether or not adding a
group of predictor variables improves a regression model. The F test compares
two models: The null model and the alternative model, where the null model
contains a subset of the predictor variables in the alternative model. The effect
size is expressed as change in R².

The version of power analysis for linear regression discussed in the preceding
section is generally more useful because the effect size is expressed in the units of
the regression coefficient, making it easier to establish the magnitude of effect
sizes that are clinically important. Use this procedure when you want to analyze
the power of groups of independent variables in a regression model.

Specification

First select Report, Table, or Plot by clicking on the corresponding tab located at
the top of the dialog box. Next make a selection for Solve for: Power, Sample
Size, or Effect Size. Enter a value for Alpha: the statistical significance criterion
used in the test.

Enter a value for the number of predictors in the null model in the Predictors
(Null Model) box. You may enter 0 to indicate a constant only model. Enter a
value for R² (Null Model): R² for the subset model. R² for a constant only model
is zero.

Enter a value for Predictors Added: the number of predictors added to the null
model to make the alternative model.

When solving for power or sample size, enter the effect size that you’d like to be
able to detect in the R² Added box. It’s the increase in R² when the predictors are
added to the null model. When solving for power or effect size, enter a value for
Sample Size, the number of cases. When solving for sample size or effect size,
enter a value for Power. Press OK to display the report.

Example The example is from Cohen (1988). Suppose a researcher is working with a
predictive model for a dependent variable and five predictor variables, three of
which are easy to obtain, and two that are expensive to obtain. Before collecting
the data on 95 subjects, she’s considering excluding the two expensive variables,
but not if the two variables together add as much as 0.04 in R² to the model
containing the other three variables, assuming R² for the full model is 0.10. The
analysis is specified in the dialog box on the preceding page. The results are
displayed below.

Power Analysis for Linear Regression (R²)

Solve for Power

Given
Alpha 0.05
Predictors (Null Model) 3
R² (Null Model) 0.06
Predictors Added 2
R² Added 0.04
Sample Size 95

Results
Power 0.423
Critical F Value 3.10
DF 2,89
NCP (λ) 4.222

The probability of rejecting the null hypothesis is 0.423. The Type II error risk is
1 - 0.423 = 0.577, too high to use the F test as the basis for excluding the two
variables and risk losing as much as 4% of the variance of the dependent variable.

Computational The method is described in Cohen (1988). The noncentrality parameter for the F
Notes test is λ = n (R²alt - R²null) / (1 - R²alt) (Koele, 1982).
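
A sketch reproducing the example report above from the λ formula:

    from scipy.stats import f, ncf

    n, k_null, k_added = 95, 3, 2
    r2_null, r2_added, alpha = 0.06, 0.04, 0.05
    r2_alt = r2_null + r2_added

    df1 = k_added
    df2 = n - (k_null + k_added) - 1             # 89
    ncp = n * (r2_alt - r2_null) / (1 - r2_alt)  # 4.222
    f_crit = f.ppf(1 - alpha, df1, df2)          # about 3.10
    power = 1 - ncf.cdf(f_crit, df1, df2, ncp)   # about 0.423
    print(round(power, 3))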

Logistic Regression
This procedure performs power analysis for simple and multiple logistic
regression. It determines the power of the test used to test whether or not a
regression coefficient is different from zero. The analysis can handle either
continuous or binary independent variables. For continuous variables, the effect
size can be expressed in terms of the regression coefficient (β) or odds ratio (e^β).
For binary variables, we define P1 as the event rate at X = 0, and P2 as the event
rate at X = 1. The effect size for binary variables can be expressed as the event
rate for X=1 (P2), or the odds ratio calculated as [P2/(1-P2)]/[P1/(1-P1)].

Specification First select Report, Table, or Plot by clicking on the corresponding tab located at
the top of the dialog box. The dialog box on the next page is an example of a
Report dialog. Next make a selection for Solve for: Power, Sample Size, or Effect
Size.



This procedure can be used to analyze the power of multiple logistic regression,
but you can only analyze one independent variable at a time, which is referred to
here as Xj. Select the distribution of the independent variable of interest from the
Xj Distribution list: Continuous or Binary. Continuous variables are assumed to
be normally distributed.

Make a selection for the Alternative Hypothesis: either one-sided test or two-sided
test. Enter a value for Alpha: the statistical significance criterion used in the test.

For multiple logistic regression, enter a value for VIF(Xj): the variance inflation
factor. VIF is a measure of collinearity between Xj and the remaining independent
variables in the model. A value of 1.0 indicates no collinearity resulting in no
inflation of the variance of the regression coefficient. The larger a VIF, the greater
the collinearity, and the greater the inflation of the variance, which has a negative
effect on power. Statistix doesn’t report VIF for logistic regression, but you can
calculate it using the formula VIF = 1 / (1 - ρ²j), where ρ²j is the square of the
multiple partial correlation coefficient (also called R²) which measures the linear
association between Xj and the remaining independent variables in the model.
You can estimate ρ²j using the value reported for R² that is obtained when Xj is
regressed on the other independent variables in the model. Enter a value of 1.0
when there’s only one independent variable.
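
For example (an assumed value, for illustration only): if regressing Xj on the
other independent variables gives R² = 0.20, then

    r2_j = 0.20              # assumed R-squared from the auxiliary regression
    vif_xj = 1 / (1 - r2_j)  # 1.25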

Enter a value for the Baseline Event Rate. For a normally distributed continuous
independent variable, it’s the proportion of cases where Y = 1 when Xj equals the
mean of Xj (equivalent to ignoring Xj). For a binary independent variable, it’s the
event rate at Xj = 0 (the proportion of cases where Y = 1 when Xj = 0). The
baseline event rate is abbreviated as P1 on plots.

For a continuous independent variable, enter a value for SD(Xj): the standard
deviation of the independent variable of interest. For a binary independent
variable, enter a value for Proportion Xj = 1 for the proposed design. Balanced
designs where this value is 0.5 achieve the greatest power.

Make a selection for Effect Size Units. For continuous variables, the effect size
can be expressed in terms of the regression β(Xj), or the Odds Ratio. The
regression coefficient β(Xj) is the change in log odds for an increase of one unit in
Xj. For binary variables, the effect size can be expressed as the Event Rate at X=1
(P2), or the Odds Ratio. When solving for effect size, the report displays both
values.

When solving for power or sample size, enter the effect size that you’d like to be
able to detect in units selected as described above, β(Xj) or the Odds Ratio for
continuous variables; Event Rate at Xj = 1 (P2) or the Odds Ratio for binary
variables.

When solving for power or effect size, enter a value for Sample Size, the number
of cases. When solving for sample size or effect size, enter a value for Power.
Press OK to display the report.

Example I The example is from page 1628 of Hsieh et al. (1998). The dialog box on the
Continuous preceding page is used to compute the sample size needed to achieve power of
Covariate 0.95 for a logistic regression with one continuous independent variable to detect an
effect size of β/SD(X) = 0.405 (equivalent to an odds ratio of e^0.405 = 1.5) assuming
a baseline event rate of 0.1. The results are displayed below.

Power Analysis for Logistic Regression

Solve for Sample Size

Given
Continuous/Normal Xj
Two-tailed Test
Alpha 0.05
VIF(Xj) 1.0
Baseline Event Rate 0.10
SD(Xj) 1.0
β(Xj) 0.405
Power 0.95

Results
Sample Size 881
Actual Power 0.950



The example analysis was posed in terms of a change in one standard deviation of
the independent variable, so we enter 1.0 in the box for SD(Xj). A two-sided test
with a significance level of 0.05 is specified.

Example II This example is from page 1626 of Hsieh et al. (1998). The dialog box displayed
Binary on the next page is used to compute the sample size needed to achieve power of
Covariate 0.95 for a logistic regression with one binary independent variable to detect an
odds ratio of 4.0 (equivalent to specifying P2 = 0.5) assuming a baseline event rate
(P1) of 0.2 and a balanced design (that is, proportion of Xj=1 is 0.5). A two-sided
test with a significance level of 0.05 is specified.

The results are displayed on the next page.

Computational The methods are described in Hsieh et al. (1998). Several alternative formulae are
Notes discussed. Statistix uses formulae 1 and 2 based on normal approximations.
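
For the continuous-covariate case (Example I), formula 1 of Hsieh et al. (1998)
with VIF = 1 reduces to the sketch below; β* is the coefficient per standard
deviation of Xj.

    import math
    from scipy.stats import norm

    p1, beta_star, alpha, power = 0.10, 0.405, 0.05, 0.95
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    n = (za + zb)**2 / (p1 * (1 - p1) * beta_star**2)
    print(math.ceil(n))   # 881, matching Example I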



Power Analysis for Logistic Regression

Solve for Sample Size

Given
Binary Xj
Two-tailed Test
Alpha 0.05
VIF(Xj) 1.0
Baseline Event Rate (P1) 0.20
Proportion Xj=1 0.50
Odds Ratio 4.0
Power 0.95

Results
Sample Size 126
Actual Power 0.951



C H A P T E R

17
Probability Functions
Statistix offers a number of useful procedures to calculate the probabilities for
various probability distributions and the inverse functions for the standard normal,
the t-distribution, and the F-distribution.

The function names and arguments are displayed at the left of the dialog box,
which appears when Probability Functions is selected from the Statistics menu.

First select the probability function or inverse function you want to use. Once
you’ve selected a function, the prompts for the arguments appear next to the four
edit controls. Enter numbers for each of the arguments displayed. In the list of
function names, the position of the random variable is usually represented as an x,
although there are some minor variations on this which are described below.
Parameters are indicated in the dialog box with logical abbreviations.



After you’ve entered all of the values, press the Go button and the results will be
computed and displayed in the Results list box.

Press the Print All button to print the results. The entire contents of the Results
list box are printed, then the Results box is emptied. Press the Close button to exit
the Probability Functions procedure.

The functions are described in detail below. We use standard notation to describe
the region of the distribution for which the probability is being calculated. For
example, the expression “Pr(y ≤ X)” represents the probability of a value of a
random variable y equal to or less than some specified value X. Both the lower tail
and upper tail probabilities are reported for cumulative distributions, and two-
tailed probabilities are reported for the Correlation, Z, and T functions. The
probability of an exact outcome is reported for the discrete functions (e.g.,
Binomial). The inverse functions compute the test statistic for a lower-tail
probability.
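
For readers who want to check values independently, most of the functions below
have close counterparts in SciPy (a sketch; argument conventions differ from the
Statistix dialogs):

    from scipy import stats

    stats.beta.cdf(0.3, 2.0, 5.0)       # BETA: Pr(y <= X)
    stats.binom.pmf(3, 10, 0.4)         # BINOMIAL: Pr(y = X)
    stats.ncx2.cdf(5.0, 3, 1.0)         # CHI-SQUARE with noncentrality
    stats.chi2.ppf(0.95, 3)             # CHI-SQ INVERSE
    stats.ncf.cdf(2.5, 2, 10, 1.0)      # F with noncentrality
    stats.f.ppf(0.95, 2, 10)            # F INVERSE
    stats.hypergeom.cdf(2, 30, 20, 10)  # HYPERGEOMETRIC: 20 red of 30, 10 drawn
    stats.poisson.cdf(4, 2.5)           # POISSON
    stats.nct.cdf(1.5, 10, 0.5)         # T with noncentrality
    stats.t.ppf(0.975, 10)              # T INVERSE
    stats.norm.cdf(1.96)                # Z
    stats.norm.ppf(0.975)               # Z INVERSE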

BETA (X, A, B)
Beta Probability Distribution

This function computes Pr(y ≤ X) and Pr(y > X) for a beta random variable y with
parameters A and B. The beta distribution is very flexible. The parameters A and
B control the shape of the distribution; the mean of the distribution is given by A /
(A + B). The beta distribution is sometimes used itself for tests, and several other
important distributions are easily derived from it, such as the t- and F-distributions
(Kennedy and Gentle 1980).

The values permitted for X range from 0 to 1. A and B must be positive, but they
don’t need to be integer values.

BINOMIAL (X, N, P)
Binomial Probability Distribution

This function computes Pr(y ≤ X), Pr(y > X), and Pr(y = X) for a random variable
y from a binomial distribution with N trials and parameter P. In many situations,
the random variable is thought of as the number of “successes” out of N
independent trials (i.e., there were N-X “failures”). The parameter P is the
probability of a success on a particular trial. In other words, Binomial computes
the probability of observing X or fewer successes out of N trials when the success
rate per trial is P.

X and N must be specified as integers. The parameter P must be between zero and
one.



CHI-SQUARE (X, DF, λ)
Chi-Square Probability Distribution

This function computes Pr(y ≤ X) and Pr(y > X) for a random variable y from a
chi-square distribution with DF degrees of freedom and noncentrality parameter λ.

X and DF must be positive numbers. The default value for the noncentrality
parameter λ is zero, which is used to compute a probability for the central chi-
square distribution. Positive values for λ are used to compute probabilities for
noncentral chi-square distributions. The typical chi-square tests in goodness-of-fit
analyses use the upper-tail probabilities from the central distribution.

CHI-SQ INVERSE (P, DF)
Inverse of the Chi-Square Distribution

This function computes the chi-square test statistic from the central chi-square
distribution with DF degrees of freedom for which the probability of a smaller
value is P.

CORRELATION (X, ρ, N)
Correlation Coefficient Distribution

This function computes Pr(y ≤ X), Pr(y > X), and Pr(|y| ≥ |X|) for a random
variable y, where y is a simple (Pearson) correlation coefficient computed from N
pairs of data drawn from a population where the true correlation is ρ.

The values permitted for X and ρ range between -1 and 1. The number of pairs N
must be a positive integer greater than two.

F (X, DF1, DF2, λ)
F Probability Distribution

This function computes Pr(y ≤ X) and Pr(y > X) for a random variable y from an
F-distribution with DF1 numerator degrees of freedom, DF2 denominator degrees
of freedom, and noncentrality parameter λ.

X must be positive. Both degrees of freedom must be greater than or equal to 1.0. The
default value for the noncentrality parameter λ is zero, which is used to compute a
probability for the central F distribution. Positive values for λ are used to compute
probabilities for noncentral F distributions. The typical F tests in regression and
analysis of variance use the upper-tail probabilities of the central distribution.



F INVERSE (P, DF1, DF2)
Inverse of the F-Distribution

This function computes the F test statistic from the central F distribution with DF1
numerator degrees of freedom and DF2 denominator degrees of freedom for which
the probability of a smaller value is P.

HYPERGEOMETRIC (X1, X2, N1, N2)
Hypergeometric Probability Distribution

This function computes Pr(y1 ≤ X1), Pr(y1 > X1), and Pr(y1 = X1), where y1 is a
random variable drawn from a hypergeometric distribution. Using the traditional
“urn” model, N1 corresponds to the number of red balls initially in the urn, and N2
corresponds to the number of black balls. Then, y1+y2 balls are randomly drawn
from the urn. Hypergeometric computes the probability of observing X1 or fewer
red balls in such a sample.

All four parameters must be nonnegative integers.

NEG-BINOMIAL (N+X, N, P)
Negative Binomial Probability Distribution

This function computes Pr(y ≤ X), Pr(y > X), and Pr(y = X) for a random variable
y which follows a negative binomial distribution. Typically, N+X is referred to as
the number of trials required to get N successes. The probability of a success on a
particular trial is the parameter P. In other words, Neg-Binomial computes the
probability of requiring N+X or fewer trials to get N successes. X is the number of
failures observed before the Nth success.

N+X and N must be positive integers, and N+X must be greater than N. The
parameter P may range from zero to one.
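
SciPy’s negative binomial uses the same X (failures before the Nth success), so the
cumulative probability described above can be checked as follows (a sketch with
assumed illustrative values):

    from scipy.stats import nbinom

    n_success, x_failures, p = 5, 7, 0.4
    prob = nbinom.cdf(x_failures, n_success, p)   # Pr(y <= X)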

POISSON (X, λ)
Poisson Probability Distribution

This function computes Pr(y ≤ X), Pr(y > X), and Pr(y = X) for a random variable
y from a Poisson distribution with rate parameter λ. In some situations, the
random variable is thought of as the number of random events in some interval of
time or space, and the rate parameter λ is the average number of such events
expected in the interval. In other words, Poisson computes the probability of
observing X or fewer events in an interval if the expected number of events is λ.

X must be an integer value and λ must be greater than zero.



T (X, DF, δ)
Student’s T-Distribution

This function computes Pr(y ≤ X), Pr(y > X), and Pr(|y| ≥ |X|) for a random
variable y from a t distribution with DF degrees of freedom and noncentrality
parameter δ.

The degrees of freedom must be greater than or equal to 1.0. The default value for the
noncentrality parameter δ is zero, which is used to compute a probability for the
central t distribution. The typical t tests in regression and analysis of variance use
the two-tailed probabilities of the central distribution. Nonzero values for δ are
used to compute probabilities for noncentral t distributions used for power
analysis.

T INVERSE (P, DF)
Inverse of the Student’s T-Distribution

This function computes the Student’s t test statistic from the central t distribution
with DF degrees of freedom for which the probability of a smaller value is P.

Z (X)
Standard Normal Distribution

This function computes Pr(y ≤ X), Pr(y > X), and Pr(|y| ≥ |X|) for a standard
normal random variable y. A statistic with a standard normal distribution is often
referred to as a Z statistic, and hence the function name.

A standard normal distribution has a mean of zero and a variance of one. A
normally distributed statistic can be transformed to standard normal form by
subtracting the mean and then dividing by the standard deviation.

Z INVERSE (P)
Inverse of the Standard Normal Distribution

This function computes the standard normal value z for which the probability of a
smaller value is P.

Computational Three key functions are used to generate the various probabilities—the error
Notes function, the log gamma function, and the incomplete beta function. The error
function is patterned after a routine suggested by Kennedy and Gentle (1980). The
method used to calculate the log gamma function is based on Pike and Hill (1966).
The procedure for computing the incomplete beta function is described in
DiDinato and Morris (1992).



Most of the probability functions are based on relatively simple functions of these
three functions. Consult Kennedy and Gentle (1980) for further detail.

Hill (1973) is used to compute the probabilities of the normal distribution.
Probabilities for the correlation distribution are computed using Guenther (1977)
for small samples (N ≤ 1000) and Kraemer (1973) for large samples.

Wichura (1988) is used for the inverse of the normal distribution. Hill (1970) is
used for the inverse of the t distribution. Majumder and Bhattacharjee (1973) is
used for the inverse of the F distribution.

Abramowitz and Stegun (1965) is used for the noncentral chi-squared and F
distributions. Lenth (1989) is used for the noncentral t distribution.



References

Abraham, B. and J. Ledolter. 1983. Statistical Methods for Forecasting. Wiley. New York.

Abramowitz, M. and I. A. Stegun. 1965. Handbook of Mathematical Functions, with Formulas,
Graphs, and Mathematical Tables. Dover Publications. New York.

Ahrens, J. H. and U. Dieter. 1973. Extensions of Forsythe's method for random sampling from the
normal distribution. Mathematics of Computation. 27: 927-937.

Ahrens, J. H. and U. Dieter. 1974. Computer methods for sampling from gamma, beta, poisson and
binomial distributions. Computing. 12:223-246.

Ahrens, J. H. and U. Dieter. 1982. Generating gamma variates by a modified rejection technique.
Communications of the ACM. 25:47-54.

AP-NORC Center for Public Affairs. 2013. AP-NORC Center for Public Affairs Research Poll-The
People’s Agenda: America's Priorities and Outlook for 2014. http://surveys.ap.org/data%5CNORC%-
5CAP_NORC_2014_PeoplesAgenda_Poll_Topline_Final.pdf.

Bausell, R. Barker, and Yu-Fang Li. 2002. Power analysis for experimental research: a practical guide
for the biological, medical, and social sciences. Cambridge: Cambridge University Press.

Begun, J. M. and K. R. Gabriel. 1981. Closure of the Newman-Keuls multiple comparisons procedure.
Journal of the American Statistical Association. 76:374.

Bevington, Philip R. 1969. Data Reduction and Error Analysis for the Physical Sciences.
McGraw-Hill. New York.

Bickel, P. J. and K. D. Doksum. 1977. Mathematical Statistics. Holden-Day. San Francisco, California.

Bingham, C. and S. E. Fienberg. 1982. Textbook analysis of covariance - is it correct? Biometrics.
38:747-753.

Bishop. Y. M. M., S. E. Fienberg and P. W. Holland. 1975. Discrete Multivariate Analysis. MIT Press.
Cambridge, Massachusetts.

BMDP-83. Dixon, W. J. (ed). 1983. BMDP Statistical Software - 1983 Printing with Additions.
Berkeley, California.

Bowdler, H., R. S. Martin, C. Reinsch and J. H. Wilkinson. 1968. The QR and QL algorithms for
symmetric matrices. Numerical Mathematics. 11:293-306.

Bowker, A. H. 1948. A test for symmetry in contingency tables. Journal of the American Statistical
Association. 43:572-574.

Box, G. E. P. and G. W. Jenkins. 1976. Time Series Analysis: Forecasting and Control. Holden-Day.
San Francisco, California.

Bradley, J. V. 1968. Distribution-free Statistical Tests. Prentice-Hall. Englewood Cliffs, New Jersey.

Brown, M. B. and J. K. Benedetti. 1977. Sampling behavior of tests for correlation in two-way
contingency tables. Journal of the American Statistical Association. 72:309-315.

Brown, M. B. and A. B. Forsythe. 1974. Robust tests for equality of variances. Journal of the American
Statistical Association. 69:364-367.

Burnham, Kenneth P. and David R. Anderson, 2002. Model Selection and Multimodel Inference: A
Practical Information-Theoretic Approach. 2nd ed. Springer-Verlag. New York.

Bury, Karl. 1999. Statistical Distributions in Engineering. Cambridge: Cambridge University Press.

Casagrande, J. T., M. C. Pike, and P. G. Smith. 1978a. The power function of the “exact” test for
comparing two binomial distributions. Applied Statistics. 27:2:176-180.

Casagrande, J. T., Pike, M.C., and Smith, P.G. 1978b. An improved approximate formula for
calculating sample sizes for comparing two binomial distributions. Biometrics. 34:483-496.

Chatterjee, S. and Price B. 1991. Regression Analysis by Example. 2nd ed. Wiley. New York.

Cheng, R. C. H. 1978. Generating beta variates with nonintegral shape parameters. Communications of
the ACM. 21:317-322.

Chernick, Michael R. and Christine Y. Liu. 2002. The saw-toothed behavior of power versus sample
size and software solutions: single binomial proportion using exact methods. The American
Statistician. 56:2:149-155.

Clarke, M. R. B. 1981. A Givens algorithm for moving from one linear model to another without going
back to the data. Applied Statistics. 30:198-203.

Cleveland, William S. 1979. Robust locally weighted regression and smoothing scatterplots. Journal of
the American Statistical Association. 74:829-836.

Cleveland, William S., Grosse, Eric., and William M. Shyu, 1992. Local regression models, in
Statistical Models in S, Chambers, John C. and Trevor J. Hastie (eds). Wadsworth. Pacific Grove, CA.

Cleveland, William S. 1993. Visualizing Data. Hobart Press, Summitt, New Jersey.

Cleveland, William S. 1994. The Elements of Graphing Data. Hobart Press, Summitt, New Jersey.

Cleveland, William S. 2007. Local Regression Software. Available from
http://cm.bell-labs.com/cm/ms/departments/sia/wsc/smoothsoft.html. Accessed 3/07/2007.



Cline, A. K. 1974. Scalar- and planar- valued curve fitting using splines under tension.
Communications of the ACM. 17:4:218-223.

Clopper, C. and E. S. Pearson. 1934. The use of confidence or fiducial limits illustrated in the case of
the binomial. Biometrika 26: 404-413.

Conover, W. J. 1999. Practical Nonparametric Statistics. 3rd ed. Wiley. New York.

Conover, W. J. and R. L. Iman. 1981. Rank transformations as a bridge between parametric and
nonparametric statistics. The American Statistician. 35:124-129.

Cook, R. D. 1977. Detection of influential observations in linear regression. Technometrics. 19:15-18.

Cook, R. D. 1979. Influential observations in linear regression. Journal of the American Statistical
Association. 74:169-174.

Cook, R. D. and S. Weisberg. 1982. Residuals and Influence in Regression. Chapman and Hall, New
York.

Cooper. B. E. 1968. The use of orthogonal polynomials. Applied Statistics. 17:283-287.

Cox, D. R. 1970. The Analysis of Binary Data. Chapman and Hall, London.

Crowder, M. J. and D. J. Hand. 1990. Analysis of Repeated Measures. Chapman and Hall. New York.

Daniel, C. 1976. Applications of Statistics to Industrial Experimentation. Wiley. New York.

Daniel, C. and F. Wood. 1971. Fitting Equations to Data. Wiley. New York.

Daniel, W. W., 1990. Applied Nonparametric Statistics. 2nd ed. PWS-Kent.

Demar, J., 2006. Statistical Comparisons of Classifiers over Multiple Data Sets. Journal of Machine
Learning Research.

DiDinato, A. R. and A. H. Morris. 1992. Algorithm 708: Significant digit computation of the
incomplete beta function ratios. ACM Transactions on Mathematical Software. 18:360-373.

Dineen, L. C. and B. C. Blakesley. 1973. A generator for the sampling distribution of the
Mann-Whitney U statistic. Applied Statistics. 22:269-273.

Draper, N. R. and H. Smith. 1966. Applied Regression Analysis. Wiley. New York.

Dunn, O. J. 1964. Multiple comparisons using rank sums. Technometrics, 5:241-252.

Durbin, J. and G. S. Watson. 1950. Testing for serial correlation in least squares regression I.
Biometrika. 37:409-428.

Durbin, J. and G. S. Watson. 1951. Testing for serial correlation in least squares regression II.
Biometrika. 38:159-178.

Durbin, J. and G. S. Watson. 1971. Testing for serial correlation in least squares regression III.
Biometrika. 58:1-19.

Einot, I. and K. R. Gabriel. 1975. A study of the powers of several methods of multiple comparisons.
Journal of the American Statistical Association. 70:351.

Federer, W. T. 1957. Variance and covariance analyses for unbalanced classifications. Biometrics.
13:333-362.

Fienberg, S. E. 1980. The Analysis of Cross Classified Categorical Data. MIT Press. Cambridge,
Massachusetts.

Forsythe, George E., Michael A. Malcolm, and Cleve B. Moler. 1977. Computer Methods for
Mathematical Computations. Prentice-Hall. Englewood Cliffs, New Jersey.

Gentleman, W. M. 1973. Basic procedures for large, sparse or weighted least squares problems.
Applied Statistics. 23:448-454.

Glantz, Stanton A. and Bryan K. Slinker. 1990. Primer of Applied Regression and Analysis of
Variance. McGraw-Hill. New York.

Gomez, Kwanchai A. and Arturo A. Gomez. 1984. Statistical Procedures for Agricultural Research.
2nd ed. Wiley. New York.

Golub, G. H. and C. F. Van Loan (1996). Matrix Computations. 3rd ed. Johns Hopkins University
Press.

Griffiths, William E., R. Carter Hill, and George G. Judge. 1993. Learning and Practicing
Econometrics. Wiley. New York.

Guenther, William C. 1977. Desk calculation of probabilities for the distribution of the sample
correlation coefficient. The American Statistician. 31:1:45-48.

Guenther, William C. 1977. Sampling Inspection in Statistical Quality Control. Griffin’s Statistical
Monographs, Number 37. London.

Haberman, S. J. 1972. Log-linear fit for contingency tables. Applied Statistics. 21:218-225.

Haberman, S. J. 1978. The Analysis of Qualitative Data. Vols. I & II. Academic Press. New York.

Hald, A. 1952. Statistical Theory with Engineering Applications. Wiley. New York.

Hanke, J. E. and A. G. Reitsch. 1989. Business Forecasting. Allyn and Bacon. Boston, Massachusetts.



Hanley, J.A. and B.J. McNeil. 1982. The meaning and use of the area under a receiver operating
characteristic (ROC) curve. Radiology. 143:29-36.

Heisey, D. M. 1985. Analyzing selection experiments with log-linear models. Ecology. 66:1744-1748.

Hill, G. W. 1970. Student’s t quantiles. CACM. 13:619-620.

Hill, I. D. 1973. Algorithm AS 66: The normal integral. Applied Statistics. 22:3:424-427.

Hoenig, John M. and Dennis M. Heisey. 2001. The abuse of power: the pervasive fallacy of power
calculations for data analysis. The American Statistician. 55:1:19-24.

Hollander, M. and D. A. Wolfe. 1973. Nonparametric Statistical Methods. Wiley. New York.

Hosmer, D. W. and S. Lemeshow. 1989. Applied Logistic Regression. Wiley. New York.

Howe, W. G. 1969. Two-Sided Tolerance Limits for Normal Populations-Some Improvements.
Journal of the American Statistical Association. 64:610-620.

Hsieh, F. Y., D. A. Block, and M. D. Larsen. 1998. A simple method of sample size calculation for
linear and logistic regression. Statistics in Medicine. 17:1623-1634.

Hsu, Jason C. 1996. Multiple Comparisons: Theory and Methods. Chapman and Hall. New York.

Iman, R. L. and J. M. Davenport. 1976. New approximations to the exact distribution of the
Kruskal-Wallis test statistic. Communications in Statistics. Ser. A. 5:1335-1348.

Iman, R. L. and J. M. Davenport. 1980. Approximations of the critical region of the Friedman statistic.
Communications in Statistics. Ser. A. 9:571-595.

IMSL - International Mathematical and Statistical Libraries, Inc. 1975. IMSL Library 1 Reference
Manual. IMSL, 7500 Bellaire Blvd., Floor 6, GNB Building, Houston, Texas.

Jolliffe, I. T. 1982. A note on the use of principal components in regression. Applied Statistics.
31:300-303.

Kachitvichyanukul, V. and B.W. Schmeiser. 1988. Binomial random variate generation.
Communications of the ACM. 31:216-222.

Kalbfleisch, J. D. and R. L. Prentice. 1980. The Statistical Analysis of Failure Time Data. Wiley. New
York.

Kennedy, W. J. and J. E. Gentle. 1980. Statistical Computing. Dekker. New York.

Koele, P. 1982. Calculating power in analysis of variance. Psychological Bulletin. 92:513-516.

Kraemer, H. C. 1973. Improved approximation to the non-null distribution of the correlation
coefficient. Journal of the American Statistical Association. 68:344:1004-1008.

Lee, E. T. 1992. Statistical Methods for Survival Data Analysis. 2nd. ed. Wiley. New York.

Lehmann, E. L. 1975. Nonparametrics: Statistical Methods Based on Ranks. Holden-Day. San
Francisco, California.

Lenth, R. V. 1989. Algorithm AS 243: Cumulative distribution function of the non-central t
distribution. Applied Statistics. 38:185–189.

Levene, H. 1960. Robust tests for the equality of variance. Contributions to Probability and Statistics;
Essays in Honor of Harold Hotelling. Stanford University Press.

Little, Thomas M. and F. Jackson Hills. 1978. Agricultural Experimentation: Design and Analysis.
Wiley. New York.

Liu, Xiaofeng and Stephen Raudenbush. 2004. A note on the noncentrality parameter and effect size
estimates for the F Test in ANOVA. Journal of Educational and Behavioral Statistics. 29:2:251-255.

Ljung, G. and G. E. P. Box. 1978. On a measure of lack of fit in time series models. Biometrika.
65:297-304.

Lund, R. E. and J. R. Lund. 1983. Algorithm 190. Probabilities and upper quantiles for the Studentized
range. Applied Statistics. 32:204-210.

Majumder, K. L. and G. P. Bhattacharjee. 1973. Algorithm 64. Inverse of the incomplete beta function
ratio. Applied Statistics. 22:411-414.

Manly, Bryan F. J. 1991. Randomization and Monte Carlo Methods in Biology. Chapman and Hall,
London.

Matsumoto, Makoto and Takuji Nishimura. 1998. Mersenne twister: a 623-dimensionally
equidistributed uniform pseudo-random number generator. ACM Transactions on Modeling and
Computer Simulation (TOMACS). 8:3-30.

Martin, R. S., C. Reinsch and J. H. Wilkinson. 1968. Householder’s tri-diagonalization of a symmetric
matrix. Numerical Mathematics. 11:181-195.

Maurice, S. Charles and Christopher R. Thomas. 2002. Managerial Economics. 7th ed. McGraw-Hill.
New York.

Mercier, L. J. 1987. Handbook for Time Series Analysis. AFIT. Dayton, Ohio.

McCullagh, P. and J. A. Nelder. 1983. Generalized Linear Models. Chapman and Hall. London.

Meyer, Patrick J. and Michael A. Seaman. 2011. A comparison of the Exact Kruskal-Wallis
Distribution to Asymptotic Approximations for All Sample Sizes Up to 105. University of South
Carolina. Web. 12/13/2012.

Montgomery, D. C. 1991. Introduction to Statistical Quality Control. 2nd. ed. Wiley. New York.

Morrison, D. F. 1977. Multivariate Statistical Methods. 2nd. ed. MacGraw-Hill. New York.

Murphy, Kevin R. and Brett Myors. 2004. Statistical Power Analysis: A Simple and General Model for
Traditional and Modern Hypothesis Tests. 2nd ed. L. Erlbaum Associates. Mahwah, New Jersey.

Nash, John C. 1979. Compact Numerical Methods For Computers: Linear Algebra and Function
Minimisation. Wiley. New York.

Nash, John C. 1990. Compact Numerical Methods for Computers. 2nd ed. Hilger. New York.

Nash, John C. and Mark Walker-Smith. 1987. Nonlinear Parameter Estimation. Marcel Dekker. New
York.

Nelder, J. A. and R. W. M. Wedderburn. 1972. Generalized Linear Models. Journal of the Royal
Statistical Society. 135:370-384.

Nelson, L. S. 1984. The Shewhart Control Chart - Tests for Special Causes. Journal of Quality
Technology. 16:237-239.

Neter, John, William Wasserman, and Michael H. Kutner. 1989. Applied Linear Regression. 2nd ed.
Irwin. Boston.

Newcombe, Robert G. 1998. Two-sided confidence intervals for the single proportion: comparison of
seven methods. Statistics in Medicine. 11:857-872.

O’Brien, R. G. 1981. A simple test for variance effects in experimental designs. Psychological
Bulletin. 89:570-574.

Oliver, I. 1967. Analysis of factorial experiments using generalized matrix operations. Journal of the
Association of Computing Machinery. 14:508-519.

Pearson, E. S. and H. O. Hartley. 1954. Biometrika Tables for Statisticians. Cambridge University
Press.

Pike, Malcolm and David Hill. 1966. Algorithm 291: Logarithm of Gamma Function. Communications
of the ACM. 9:684.

Pregibon, D. 1981. Logistic regression diagnostics. Annals of Statistics. 9:705-724.

Ratkowsky, David A. 1983. Nonlinear Regression Modeling. A Unified Practical Approach. Marcel
Dekker. New York.

Ratkowsky, David A. 1990. Handbook of Nonlinear Regression Models. Marcel Dekker. New York.

Rosner, Bernard. 2000. Fundamentals of Biostatistics. 5th ed. Duxbury. Belmont, CA.

Royston, J. P. 1982. Expected normal order statistics (exact and approximate). Applied Statistics.
31:161-165.

Royston, J. P. 1995. A remark on algorithm AS 181: The W-test for normality. Applied Statistics.
44:547-551.

Ryan, T. A. 1960. Significance tests for multiple comparison of proportions, variances, and other
statistics. Psychological Bulletin. 57:318-328.

Scheffe, H. 1959. The Analysis of Variance. Wiley. New York.

Searle, Shayle R. 1987. Linear Models for Unbalanced Data. Wiley. New York.

Seber, G. A. F. 1977. Linear Regression Analysis. Wiley. New York.

Shapiro, S. S. and R. S. Francia. 1972. An approximate analysis of variance test for normality. Journal
of the American Statistical Association. 67:215-216.

Shapiro, S. S. and M. Wilk. 1965. An analysis of variance test for normality. Biometrika. 52:591-611.

Siegel, S. and N. J. Castellan. 1988. Nonparametric Statistics for the Behavioral Sciences. 2nd ed.
McGraw-Hill. New York.

Simon, R. and Y. Lee. 1982. Non parametric confidence limits for
survival probabilities and the median. Cancer Treatment Reports. 66:37-42.

Smirnov, N. V. 1939. Estimate of deviation between empirical distribution functions in two
independent samples. Bulletin Moscow University. 2(2):3-16.

Snedecor, G. W. and W. G. Cochran. 1980. Statistical Methods. 7th ed. The Iowa State University
Press. Ames, Iowa.

Steel, Robert G. D. and James H. Torrie. 1980. Principles and Procedures of Statistics: A Biometrical
Approach. 2nd ed. McGraw-Hill. New York.

Thomopoulos, N. T. 1980. Applied Forecasting Methods. Prentice-Hall. Englewood Cliffs, New
Jersey.

Velleman, P. and D. Hoaglin. 1981. ABC’s of EDA. Duxbury Press.

Wagenmakers, E. and S. Farrell. 2004. AIC model selection using Akaike weights. Psychonomic
Bulletin & Review. 11(1):192-196.

Weisberg, S. 1982. MULTREG User’s Manual. Technical Report #298, School of Statistics, University
of Minnesota. St. Paul, Minnesota.



Weisberg, S. 1985. Applied Linear Regression. 2nd ed. Wiley. New York.

Weisberg, S. and C. Bingham. 1975. An analysis of variance test for normality suitable for machine
calculation. Technometrics. 17:133.

Welch, B. L. 1947. The generalization of "student's" problem when several different population
variances are involved. Biometrika, 34:28–35.

Welch, B. L. 1951. On the comparison of several mean values: an alternative approach. Biometrika,
38:330-336.

Welsch, R. E. 1977. Stepwise multiple comparison procedures. Journal of the American Statistical
Association. 72:359.

Wichura, Michael J. 1988. The percentage points of the normal distribution. Applied Statistics. 37:477-
484.

Zar, J. H. 1999. Biostatistical Analysis. 4th ed. Prentice-Hall. Englewood Cliffs, New Jersey.

Index

2SLS 211
Abs function 54
Add new variables
  import 72
  insert variables 27
  merge 66
Adjusted R² 182, 184
AICc 183
Akaike weights 188
Akaike's Information Criterion 183
All-pairwise comparisons 251
Alphanumeric data 13, 27
Analysis of variance 218
  analysis of covariance example 247
  balanced lattice design 226
  Cat function 222
  completely randomized design 219
  contrasts 261
  error terms 246
  factorial design 229
  Friedman two-way 151
  general AOV/AOCV 243
  GLM 218
  homogeneity of variance 143, 220
  Kruskal-Wallis one-way 147
  Latin square design 224
  marginal sums of squares 218
  means and standard errors 249
  missing values 218
  multiple comparisons 251
  multiple error terms 246
  one-way 142
  plots 265
  polynomial contrasts 264
  pooling sums of squares 247
  randomized complete block 221
  repeated measures design 238
  residuals 267
  simple effects 259
  split-plot design 232
  split-split-plot design 236
  strip-plot design 234
  strip-split-plot design 237
Angle function 54
ANSI strings 13, 24
Arccos function 54
Arccosh function 54
Arcsin function 54
Arcsin-square root transformation 54
Arcsinh function 54
Arctan function 54
Arctanh function 54
ARIMA 344
Arithmetic expressions 49
Association tests 305
Atkinson function 54
Attributes control chart 352
Autocorrelation 325, 333
Autocorrelation in regression 173
Autocorrelation, partial 335
Automatic format 46, 79, 87
Backup files 19
Balanced lattice design 226
Bar chart 112
Best subset regressions 187
Beta probability function 442
Bonferroni's multiple comparisons 255
Boolean expressions 50
Box and whisker plot 115
Box Jenkins 330, 344
Box plot 115
Box’s test 242
Breakdown 99
Brown and Forsythe test 143, 220
Built-in functions 53
C chart 358
c-statistic 196
Capitalize function 55
Case function 55
Cat function 55
Ceiling function 55
Censored data 376
Chi-square probability function 443
Chi-square test 307
Chi-square, inverse function 443
Close file 62
Coefficient of variation 92, 220, 233
Collinearity 170, 216
Colors 19
Column format 45
Column width 45
Comment lines in text files 77
Completely randomized AOV 219
Concatenate strings 50
Conditioning plot 125
Confidence interval of mean 92
Contingency tables
  chi-square test 307
  cross tabulation 97
  log-linear models 318
  Mantel-Haenzel test 385
  McNemar's symmetry test 313
  two by two 316
Contrasts 261
Control chart 352
Control limits 352
Converting strings
  to dates 34, 56
  to numbers 34
Cook's distance 177
Coplot 125
Copy 25
Correlation coefficient probability function 443
Correlations 164
  partial 166
  Pearson 164
  Spearman 322
Cos function 56
Cosh function 56
Count function 56
Covariates 247
Cox's F test 381
Cox-Mantel test 381
Cox’s proportional hazards regression 387
Cp 183, 188
Cronbach's alpha 165
Cross correlation plot 336
Cross tabulation 97
Cubic spline 269
Cumsum function 56
Cumulative distribution plot 107
Cut 25
CV 92
Data
  printing 86
  saving 64
Data entry 23, 27
Data menu 22
Data set label 47
Data set size 13
Data types 13, 27
Date arithmetic 50
Date data type 13
Date format 18
Date function 34, 56
Date variables 24, 27
Day function 56
Dayofweek function 56
Decimal format 46, 79, 87
Decision interval cusum 56
Delete
  cases 29
  highlighted cells 30
  omitted cases 30
  variables 29
Delete function 56
Descriptive statistics 92
Deviance tests 196, 205, 208
Dialog boxes 15
Diff function 56
Dot plot 109
drag-and-drop 16
Dummy variables 36, 204
Duncan's multiple comparisons 255
Dunn's multiple comparisons 149
Dunnett's multiple comparisons 256
Durbin-Watson test 172
Edit menu 25
Eigenvalues-principal components 214
Error bar chart 112
EWMA chart 372
Excel files
  export 78
  import 72
  view 83
Exiting Statistix 89
Exp function 56
Exponential format 47, 79, 87
Exponential smoothing 340
Export
  Excel file 78
  text file 78
F-distribution, inverse function 444
F-probability function 443
Factorial design 229
Factorial function 56
Field width 79, 87
File info report 84
File menu 61
Files
  export 78
  import 72
  log 71
  merge cases 66
  merge labels, transformations, etc. 67
  merge variables 66
  open 62
  save 64
  summary file 68
  view 81
Fill 31
Fitted values, regression 173
Fixed format 46, 79, 87
Floor function 56
Fonts
  plots 20
  report 19
  spreadsheet 19
Forecasts
  ARIMA 348
  exponential smoothing 340
  moving averages 337
Format statement
  export 79
  import 75
  print 87
Fraction nonconforming 354
Fractional factorial design 230
Frequency distribution 93
Friedman all-pairwise comparisons 153
Friedman two-way AOV 151
Functions 53
Gehan-Wilcoxon test 381, 383
General AOV/AOCV 243
Generalized linear models 208
Geomean function 56
Goodness-of-fit tests 305
Gram-Schmidt 40
Graph file
  save 11
  view 82
Greenhouse-Geisser test 240, 243
Grid lines 20
G² statistic 208
Hazard function 376
Hazard function plot 380
Heterogeneity, test of 307
Histogram 105
Homogeneity of variances
  completely randomized design 220
  hypothesis test 160
  one-way AOV 143
  two-sample t test 138
Homogeneity, test of 307
Hosmer-Lemeshow statistic 197
Hsu's multiple comparisons 258
Huynh-Feldt test 240
Hypergeometric probability function 444
Hypothesis tests
  one correlation 160
  one mean 157
  one proportion 155
  one variance 159
  two correlations 162
  two means 158
  two proportions 156
  two variances 160
I chart 368
Import
  Excel file 72
  format statement 75
  single variable 77
  text file 74
Indicator variables 36, 204
Infinite parameter estimates 210
Insert
  cases 28
  variables 27
Insert function 56
Installing Statistix 8
Integer format 79, 87
Integer variables 13, 24, 27
Interpolation 269
Iterative proportional fitting 319
Iterative reweighted least squares 194
Kaplan-Meier 377
Kendall's coefficient of concordance 151
Kendall's tau 324
Kolmogorov-Smirnov test 311
Kruskal-Wallis one-way AOV 147
Labels 47
Lag function 56
Latin square design 224
Lattice design 226
Least significant difference 254
Length function 56
Levene's test 143, 220
Leverage, regression 174, 177
Likelihood ratio tests 319
Linear contrasts 261
Linear models 163
Linear regression 169
  best model selection 184
  coefficients 170
  Durbin-Watson test 172
  forced through origin 169
  missing values 179
  predicted values 173
  residuals 176
  sensitivity 180
  stepwise 189
  stepwise AOV table 182
  variance-covariance 183
  weighted 169
Linear spline 269
Ln function 56
Loess 291
  analysis of variance 298
  plots 300
  predicted values 299
  residuals 304
Log file 71
Log function 57
Log-linear models 318
Logical expressions 49, 50
Logical operators 51
Logistic regression 193, 207
  classification table 197
  Hosmer-Lemeshow statistic 197
  odds ratios 198
  stepwise 200
Logit transformation 194
Logrank test 381, 383
Lowcase function 57
LSD 254
M function 57
Mallow's Cp statistic 182, 183, 188
Mann-Whitney U statistic 140
Mantel-Haenzel test 385
Mauchly’s statistic 240
Max function 57
Maximum 92
Maximum likelihood 193
McNemar's symmetry test 313
Mean
  analysis of variance 249
  breakdown 99
  descriptive statistics 92
  dot plot 109
  error bar chart 112
Mean function 57
Median 92, 103
Median absolute deviation 92
Median function 57
Median survival time 379
Median test 140
Menus 8, 9
Merge
  cases 66
  labels, transformations, etc. 67
  variables 66
Min function 57
Minimum 92
Missing values 14, 24
  arithmetic and logical expressions 52
  M function 57
Modulo function 57
Month function 57
Moving averages 337
Moving range chart 370
MR chart 370
Multi-sample survival tests 383
Multicollinearity 170, 216
Multiple comparisons 251
Multiple regression 169
Negative binomial probability function 444
Nested breakdown 99
New 62
Nonadditivity in analysis of variance 222
Noncentrality parameter 392
noncentrality parameter of F-distribution 400
Nonconformities 358
Nonconformities per unit 360
Nonlinear models 269
Nonlinear regression 279
  minimization history 286
  models 280
  plots 287
  predicted values 287
  residuals 290
  variance-covariance 291
Nonparametric tests 129
Nonsphericity 240, 421
Normal probability plot 328
Normality test 327
Normalize function 57
Np chart 356
Number function 34
O'Brien's test 143, 220
Odds ratios 198
Omit/select/restore cases 41
One-sample t test 130
One-way AOV 142
Open 62
Ordering variables 45
Outlier in regression 178
P chart 354
Paired t test 131
Pareto chart 353
Partial autocorrelation 335
Partial correlations 166
Paste 25
Pearson correlations 164
Percentile function 57
Percentiles 103
Peto-Wilcoxon test 381, 383
Pi function 57
Pie chart 114
plot titles 21
Plots
  colors 19
  fonts 20
  line patterns 20
  print 86
  save 11
  symbols 20
  titles 21
Poisson probability function 444
Poisson regression 202, 207
Polynomial contrasts 264
Polynomial regression 271
  plots 276
  predicted values 275
  residuals 278
  results menu 275
  variance-covariance 279
Pos function 57
Power analysis 390
  completely randomized design 402
  factorial design 408, 410
  Latin square design 406
  linear regression 432, 434
  logistic regression 436
  one correlation 429
  one proportion 425
  one-sample t test 391
  one-way AOV 402
  paired t test 391
  RCB design 405
  repeated measures 420, 422
  split-plot design 411
  split-split-plot design 417
  strip-plot design 414
  two correlations 431
  two proportions 428
  two-sample t test 395
Power function 57
Power of multiple comparisons 401
Power of the F test 398
Precedence rules 50, 52
Predicted values, regression 173, 176
Principal components 216
Print
  data 86
  plot 86
  report 85
Probability functions 441
Probit regression 199
Product limit estimates 377
Proportion test 155
Proportional hazards regression 387
  coefficient table 388
  likelihood test 389
  variance-covariance 389
Proportions 193, 207, 310
Q-Q plot 119
Quality control 352
Quantile plot 117
Quartiles 92
R chart 365
Random function 57
Randomized complete block design 221
Randomness test 325
Rank correlations 322
Rank function 58
Rank sum test 138
Rankit plot 328
Real data type 13
Real variables 24, 27
Recode 35
References 447
Regression
  best subsets 187
  linear 169
  loess 291
  logistic 193
  nonlinear 279
  Poisson 202
  polynomial 271
  stepwise linear 189
  stepwise logistic 200
Regression coefficients 170
Regression options 171
Relational operators 51
Reopen 63
Reordering variables 45
Repeated measures design 238
Report font 19
Reports
  print 85
  save 11
Residual plots
  analysis of variance 265
  loess 300
  nonlinear regression 287
  polynomial regression 276
  regression 175
Residuals
  analysis of variance 267
  loess 304
  nonlinear regression 290
  polynomial regression 278
  regression 176
Restore 41
Results menu 17
Results page 17
Round function 58
Row functions 58
RowCount function 58
RowMax function 58
RowMean function 59
RowMedian function 59
RowMin function 59
RowSD function 59
RowSum function 59
Runs test 325
R² 184, 188
S chart 367
Sample data 63
SARIMA 344
Save 64
  plots 11
  reports 11
Save As 64
Saving
  data 64
Scatter plot 122
Scatter plot matrix 124
Scheffe's multiple comparisons 255
Scientific notation 47
SD function 59
SelCase function 59
Select cases 41
Sensitivity, regression coefficients 180
Settings 17
  general 18
  line patterns 20
  plot colors 19
  plot fonts 20
  plot symbols 20
  report font 19
  spreadsheet font 19
Shapiro-Wilk normality test 327
Sidak's multiple comparisons 255
Sign test 132
Simple effects 259
Sin function 59
Sinh function 59
Slopes 170
Smirnov test 311
Somers’ D 196
Sorting cases 43
SPC 352
Spearman correlations 322
Sphericity 240
Spline interpolation 269
Split-plot design 232
Split-split-plot design 236
Spreadsheet window 23
Sqr function 59
Sqrt function 59
Stack variables 37
Standard deviation 92
Standard error of mean 92, 249
Standard normal distribution, inverse function 445
Standard normal probability function 445
Standardized residual, regression 177
Statistical process control 352
Stem and leaf plot 101
Stepwise linear regression 189
Stepwise logistic regression 200
String arithmetic 50
String data type 13
String function 59
String variables 24, 27
Strip-plot design 234
Strip-split-plot design 237
Student's t distribution, inverse function 445
Student's t probability function 445
Student-Newman-Keuls 255
Studentize function 59
Sum function 59
Summary file 68
Summary plots 90
Summary statistics 104
Survey frequency 96
Survival analysis 376
Survival time 376
  median 379
  percentiles 379
Survivorship function 376
Survivorship function plot 380
T test
  one-sample 130
  paired 131
  two-sample 136
Tan function 59
Tanh function 59
Template 67
Tension spline 269
Text files
  comment lines 77
  export 78
  import 74
  view 81
Time series 330
Time series plot 331
  exponential smoothing 343
  SARIMA 349
Titles 21
Tolerance intervals 374
Toolbar 10
Transformations 31
  converting variable types 34
  date constants 33
  equality tests 52
  functions 53
  if-then-else 33
  missing values 34
  omitted cases 35
  simple assignment 32
  string constants 33
Transpose 39
Trunc function 60
Tukey's multiple comparisons 255
Tukey's nonadditivity test 222
Two by two contingency tables 316
Two stage least squares regression 211
Two-sample survival tests 381
Two-sample t test 136
U chart 360
Unicode string 13
Unicode strings 24
Unitize function 60
Unstack variables 38
Unusualness values 174
Upcase function 60
Value labels 48
Var1 .. Var99 syntax 27
Variable format 45
Variable labels 47
Variable name order 18
Variable name selection 16
Variable names 12
Variable order 45
Variable types 13, 24, 27
Variables control chart 352
Variance function 60
Variance inflation factor 170
Variance-covariance 167
View
  Excel file 83
  graph file 82
  Statistix file 84
  text file 81
Weighted least squares 169
Weighted regression 169
Welch's ANOVA 144, 221
Wilcoxon rank sum test 138
Wilcoxon signed rank test 134
X bar chart 362
X chart 368
Year function 60
ZInverse function 60
ZProb function 60