
Stats Summary

One-variable statistics
- This is the bulk of the course!
- Looks at the counts of how often something occurs – either as a frequency or a proportion
- At the most basic level, compare centres (mean or median), shape (symmetry, peaks, tails, etc.), spread (range, interquartile range, standard deviation) and outliers
- For inference, compare a null hypothesis (H0) about the population against an alternative hypothesis (Ha) using a sample statistic and either a z-critical or t-critical value
- The decision to reject or fail to reject the null hypothesis is based on a chosen level of significance (α)
Choosing a test by population parameter:
- Mean µ, known σ:
  - One sample → One-sample z
  - Matched pairs → One-sample z for differences
  - Two sample → Two-sample z
- Mean µ, unknown σ:
  - One sample → One-sample t
  - Matched pairs → One-sample t for differences
  - Two sample → Two-sample t
- Proportion p:
  - One sample → One-sample z
  - Matched pairs → One-sample z within pairs
  - Two sample → Two-sample z
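As a concrete illustration of one row of the table above, a one-sample t statistic (unknown σ) can be computed directly from its definition; the sample data below are hypothetical, made up just for illustration.

```python
import math
import statistics

def one_sample_t(data, mu0):
    """t statistic for H0: population mean = mu0, when sigma is unknown."""
    n = len(data)
    xbar = statistics.mean(data)
    s = statistics.stdev(data)  # sample standard deviation (n - 1 denominator)
    return (xbar - mu0) / (s / math.sqrt(n))

# Hypothetical sample: does the population mean differ from 10?
sample = [9.8, 10.4, 10.1, 9.6, 10.9, 10.2]
t = one_sample_t(sample, 10)
```

The resulting t would then be compared against a t-critical value with n − 1 degrees of freedom at the chosen α.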

One-variable statistics also apply to two-way tables, where we use χ² tests and look at the difference between observed and expected counts, with expected counts based on the row and column proportions.
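The χ² statistic for a two-way table follows directly from that observed-versus-expected comparison; a minimal sketch, with a made-up 2×2 table of counts:

```python
def chi_square(observed):
    """Chi-square statistic for a two-way table of counts.

    Expected count for cell (i, j) = row_total_i * col_total_j / grand_total,
    i.e. the count we would see if rows and columns were independent.
    """
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / total
            chi2 += (obs - exp) ** 2 / exp
    return chi2

# Hypothetical 2x2 table of observed counts
table = [[30, 20],
         [20, 30]]
chi2 = chi_square(table)  # every expected count here is 25
```

The statistic is then compared to a χ² critical value with (rows − 1)(columns − 1) degrees of freedom.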

Two Variable Statistics

Not nearly as much time was spent on two-variable statistics – chapters 3, 4 and 14. The same concepts of sample, sampling distribution and population apply. Sample data allow us to calculate a line of best fit. With many samples, each has a different line of best fit, and these lines are used to estimate the true line – like a population line of best fit. Because this line is estimated from samples, there is variation, or standard error, in the slope, the y-intercept and therefore the line. Fortunately we do not need to calculate most of these by hand; statistical software does the calculations. We are just required to understand that sampling is used to estimate the true line and that there is therefore error, or variation, in each of its parts.
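Although software normally does this, the slope, intercept and the standard error of the slope can be computed from first principles; a minimal sketch with hypothetical, roughly linear data:

```python
import math

def fit_line(xs, ys):
    """Least-squares line of best fit, plus the standard error of the slope."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    b = sxy / sxx                 # slope
    a = ybar - b * xbar           # y-intercept
    # Residual standard deviation s (n - 2 degrees of freedom),
    # then the standard error of the slope is s / sqrt(Sxx).
    resid_ss = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    s = math.sqrt(resid_ss / (n - 2))
    se_b = s / math.sqrt(sxx)
    return a, b, se_b

# Hypothetical data with a roughly linear trend
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b, se_b = fit_line(xs, ys)
```

The standard error of the slope is exactly the "variation in the slope" across repeated samples that the notes describe, and it is what inference on the slope (a t test or confidence interval) is built from.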
