Review Chaps 1-2

Chapter 1
Inference: When we make an inference we are drawing a conclusion from incomplete
information. The science of statistics formulates rules about the validity of these
inferences. The realm of valid inferences is called the scope of inference.

Causal Inference can be justified only from a randomized experiment. A
randomized experiment is defined as an experiment in which the investigator uses some
randomization mechanism to assign experimental units to groups.
Another kind of study is the observational study. While we can obtain
much interesting and important information from an observational study, we cannot infer
cause and effect and must be very careful about what conclusions we draw.
Randomization in an experiment helps us control for confounding variables. We cannot
be 100% sure that we have accounted for all confounding variables in an experiment.
However, randomization gives us the best chance that what we observe is caused by the
treatments and not by unobserved variables that are related to both group membership
and the response variable.
Example: Creativity Experiment
The investigator randomly assigns students in creative writing at her university to
one of two groups—the intrinsic group, who completes a questionnaire focusing on
intrinsic rewards for creativity, and an extrinsic group, who completes a questionnaire
focusing on extrinsic rewards for creativity. This is a randomized experiment, even
though the students are not a random sample from the population of all creative writers.
We can infer cause and effect from the results; however, we cannot extrapolate to a
population broader than that of the students we are studying.

Population Inference can be justified only from a random sample. Only when
we take a random sample from the population do we have mathematical models for
quantifying the behavior of our population estimates. Randomization gives us the highest
probability that what we sample is representative of the population proportions and
distribution.
Researchers measured the lead content in teeth and the IQ score for 3,229
children attending first and second grade between 1975 and 1978.

What is the value of this study?

Null and Alternative Hypotheses: The questions that we would like to ask of our data
need to be translated into questions about the parameters in the probability model we are
using. This is most commonly done using hypothesis testing, a formalized approach to
inference from a random sample. In the creativity study, the question “Is there a
treatment effect?” is translated into:

Model: Y* = Y + δ

where Y is a subject’s score without taking the questionnaire, Y* is the score after,
and δ measures the difference. Hence, δ is the treatment effect and our hypothesis test is
then:

H0: δ = 0 versus Ha: δ ≠ 0

A test statistic is a numerical quantity that we calculate from our data to test the
hypotheses of interest. For the creativity data, the test statistic is the difference of the
averages between the two treatment groups. In order to use the test statistic we have to
know its sampling distribution, or its randomization distribution.

P-value: the probability, in a randomized experiment, that the randomization alone would
produce a test statistic as extreme as, or more extreme than, the one observed. To
calculate this value we need to know the mathematical curve of the sampling distribution
of the test statistic, or we must conduct a simulation or resampling.
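The resampling route can be sketched directly: under the null hypothesis the group labels are arbitrary, so we reshuffle them many times and count how often the reshuffled difference of averages is at least as extreme as the observed one. A minimal Python sketch; the scores below are made-up illustrative values, not the actual creativity-study data:

```python
import random

# Hypothetical scores for the two treatment groups (illustrative only,
# not the actual creativity-study data).
intrinsic = [20.6, 21.3, 17.5, 24.0, 19.1, 22.6, 23.1, 20.5]
extrinsic = [15.7, 16.2, 14.8, 19.3, 17.2, 18.1, 13.9, 16.5]

observed = sum(intrinsic) / len(intrinsic) - sum(extrinsic) / len(extrinsic)

pooled = intrinsic + extrinsic
n1 = len(intrinsic)

random.seed(0)
reps = 10_000
count = 0
for _ in range(reps):
    random.shuffle(pooled)  # re-randomize the group labels
    diff = sum(pooled[:n1]) / n1 - sum(pooled[n1:]) / (len(pooled) - n1)
    if abs(diff) >= abs(observed):  # two-sided: as extreme or more so
        count += 1

p_value = count / reps
print(f"observed difference = {observed:.3f}, randomization p-value = {p_value:.4f}")
```

With many repetitions, the proportion of reshuffles at least as extreme as the observed difference approximates the randomization p-value.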

Confidence Intervals: provide a measure of the precision of our estimate. A 95% confidence
interval is produced by a procedure that, if we were to repeat our experiment a large
number of times under the exact same conditions, would capture the true parameter 95% of
the time. The general form of the confidence interval is

statistic ± critical value * SE(statistic)
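The general form translates directly into code. A small sketch; the numbers are illustrative, not taken from any study in the notes:

```python
def confidence_interval(statistic, critical_value, se):
    """General form: statistic ± critical value * SE(statistic)."""
    margin = critical_value * se
    return (statistic - margin, statistic + margin)

# Illustrative: sample mean 10.0, SE 0.5, 95% normal critical value 1.96
lo, hi = confidence_interval(10.0, 1.96, 0.5)
print(f"95% CI: ({lo:.2f}, {hi:.2f})")  # (9.02, 10.98)
```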

Chapter 2

Normal Model: A probability distribution. Most of the analyses in this course will use
the normal model, or assume that the underlying distribution is regular and symmetric
enough that the sample estimates are approximately normal, and hence the test statistics
are either normal or t.

One Sample t Test: a test about the population mean where the test statistic has a t
distribution. A true one-sample t test is rarely performed. Often we are interested in the
average difference between paired observations in a sample. In this case, under certain
circumstances, the scaled difference of the two sample averages has a t distribution and
hence we use the same methods as in the one sample t test.
Standard error: the standard error of a statistic is an estimate of the standard
deviation of its sampling distribution.

Example: the standard error of the sample average is

SE(ȳ) = s / √n, where s² = ∑(yᵢ − ȳ)² / (n − 1), summing over i = 1 to n.
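A quick sketch of this computation in Python, on made-up numbers:

```python
import math

def se_of_mean(sample):
    """SE(ybar) = s / sqrt(n), with s^2 the sample variance (n - 1 divisor)."""
    n = len(sample)
    ybar = sum(sample) / n
    s2 = sum((y - ybar) ** 2 for y in sample) / (n - 1)
    return math.sqrt(s2) / math.sqrt(n)

data = [4.0, 6.0, 5.0, 7.0, 8.0]  # illustrative numbers
print(round(se_of_mean(data), 4))  # 0.7071
```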

Z and t - ratios: the ratio of an estimate’s error to its standard error is a convenient
test statistic. When the standard deviation is known, we use the Z-ratio. When the
standard deviation is estimated, we use the t - ratio. Most often we will need to use an
estimate of the standard error and hence we will most often use the t-ratio.

Example: t = (ȳ − µ0) / SE(ȳ) is the t-ratio used to test the hypothesis that the true
mean is µ0 in a one-sample t test.

Example: t = (ȳ1 − ȳ2 − 0) / SE(ȳ1 − ȳ2) is the t-ratio used to test the hypothesis
that the true difference between paired observations is zero.

Example: Schizophrenia Study

Researchers examined 15 pairs of monozygotic (identical) twins where one twin
had schizophrenia and the other did not. They measured the volumes of several regions
inside the twins’ brains using MRI. The observed average difference was 0.199 cm³,
and the sample of differences had a standard deviation of 0.238 cm³.

What is the confidence interval?

Can the difference be attributed to chance alone?
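Both questions can be worked with the one-sample t machinery on the summary values given (15 pairs, average difference 0.199, SD 0.238). One detail is assumed from a t table rather than from the notes: the 95% critical value for 14 degrees of freedom, 2.145. A sketch:

```python
import math

# Summary values from the schizophrenia twin study
n, dbar, s = 15, 0.199, 0.238

se = s / math.sqrt(n)   # SE of the average difference
t = dbar / se           # t-ratio for H0: delta = 0
t_crit = 2.145          # t_{0.975} with 14 df, from a t table (assumed value)
ci = (dbar - t_crit * se, dbar + t_crit * se)

print(f"t = {t:.2f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

The t-ratio comes out near 3.24 on 14 degrees of freedom, and the 95% interval excludes zero, suggesting the difference is not easily attributed to chance alone.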

Two sample t tests: In addition to the normal or near-normal assumption, for the two
sample t test we also require the assumption of independent samples. The observations
within a sample must be both independent of each other and of all observations in the
other sample. The two-sample test statistic is given by

t = [(ȳ1 − ȳ2) − (µ1 − µ2)] / SE(ȳ1 − ȳ2)

where

SE(ȳ1 − ȳ2) = √(s1²/n1 + s2²/n2)

when equal variances cannot be assumed, and

SE(ȳ1 − ȳ2) = √(sp²/n1 + sp²/n2) = sp √(1/n1 + 1/n2),
with sp² = [(n1 − 1)s1² + (n2 − 1)s2²] / (n1 + n2 − 2),

when equal variances can be assumed.

Note: The course text uses the pooled estimate of the standard error because this is the
estimate used in ANOVA, an extension of the t-test. As mentioned in 444, the unpooled
estimate is considered superior for most tests. However, we cannot use the unpooled in
the ANOVA situation and so we review the pooled estimate here.

Example: Sparrow humerus length and survival

Investigators wanted to test whether selection pressure is acting on sparrow humerus
length. After a severe cold snap, they measured humerus length of 24 sparrows that died
and 35 that survived.

Group   n    Average (in.)   SD (in.)
Died    24   0.72792         0.02355
Lived   35   0.73800         0.01984

What is the estimate of the difference?

What are the degrees of freedom?
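Both questions can be answered from the summary table using the pooled two-sample formulas above. A sketch in Python (taking the difference as lived minus died):

```python
import math

# Summary statistics from the sparrow table
n1, ybar1, s1 = 24, 0.72792, 0.02355  # died
n2, ybar2, s2 = 35, 0.73800, 0.01984  # lived

estimate = ybar2 - ybar1                          # lived minus died
df = n1 + n2 - 2                                  # pooled degrees of freedom
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df  # pooled variance
se = math.sqrt(sp2) * math.sqrt(1 / n1 + 1 / n2)  # pooled SE of the difference
t = estimate / se

print(f"estimate = {estimate:.5f}, df = {df}, SE = {se:.5f}, t = {t:.2f}")
```

This gives an estimated difference of about 0.01008 in. on 57 degrees of freedom, with a t-ratio near 1.78.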