2. Sample Survey
Sampling is the selection of a subset of a population in order to yield knowledge about the
population of concern. The three main advantages of sampling are that (i) the cost is lower, (ii)
data collection is faster, and (iii) since the data set is smaller, it is possible to improve the
accuracy and quality of the data.
3. Experiment
Experiments are performed when there are controlled variables (such as a certain treatment in
medicine) and the intention is to study their effect on other, observed variables (such as the health
of patients). One of the main requirements for an experiment is the possibility of replication.
4. Observational study
An observational study is appropriate when there are no controlled variables and replication is
impossible. This type of study typically uses a survey. An example is one that explores the
correlation between smoking and lung cancer. In this case, the researchers would collect
observations of both smokers and non-smokers and then look for the number of cases of lung
cancer in each group.
d. Possible sources of errors and biases should be controlled.
The population of concern as a whole may not be available for a survey. The subset of items
that can actually be measured is called a sampling frame (from which the sample will be selected).
The survey plan should specify a sampling method, determine the sample size, lay out the steps for
implementing the sampling plan and, finally, carry out the sampling and data collection.
a. Nonprobability sampling
Nonprobability sampling is any sampling method where some elements of the population have
no chance of selection or where the probability of selection can't be accurately determined. The
selection of elements is based on some criteria other than randomness. These conditions give rise
to exclusion bias, caused by the fact that some elements of the population are excluded.
Nonprobability sampling does not allow the estimation of sampling errors. Information about the
relationship between sample and population is limited, making it difficult to extrapolate from the
sample to the population.
Example: We visit every household in a given street, and interview the first person to answer the
door. In any household with more than one occupant, this is a nonprobability sample, because
some people are more likely to answer the door (e.g. an unemployed person who spends most of
their time at home is more likely to answer than an employed housemate who might be at work
when the interviewer calls) and it's not practical to calculate these probabilities.
One example of nonprobability sampling is convenience sampling (for instance, asking questions
of customers in a supermarket). Another is quota sampling, in which judgment is used to select
subjects based on specified proportions. For example, an interviewer may be told to sample 200
females and 300 males between the ages of 45 and 60.
In addition, nonresponse effects may turn any probability design into a nonprobability design if
the characteristics of nonresponse are not well understood, since nonresponse effectively
modifies each element's probability of being sampled.
b. Probability sampling
Probability sampling is sampling in which every unit in the population has a chance of being
randomly selected, and the probability of selection can be accurately determined.
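As a minimal sketch (in Python, with a hypothetical frame of 100 units), simple random sampling gives every unit the same known inclusion probability n/N, which is what makes sampling-error estimates possible:

```python
import random

# Hypothetical sampling frame of 100 units.
random.seed(42)  # fixed seed so the example is reproducible
population = list(range(1, 101))

# Draw n = 10 units without replacement; every unit is equally likely.
sample = random.sample(population, k=10)

# Known inclusion probability n/N = 10/100.
inclusion_probability = len(sample) / len(population)
```

The point of the sketch is that the selection probability is known in advance, unlike in the door-answering example above.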
d. Systematic sampling
Systematic sampling relies on dividing the target population into strata (subpopulations) of equal
size and then randomly selecting one element from the first stratum and the corresponding
elements from all other strata. A simple example would be to select every 10th name from the
telephone directory, with the first selection made at random. Simple random sampling (SRS), by
contrast, might happen to draw a sample concentrated at the beginning of the list; systematic
sampling spreads the sample over the whole list.
As long as the starting point is randomized, systematic sampling is a type of probability
sampling. 'Every 10th' sampling is especially useful for efficient sampling from databases.
However, systematic sampling is especially vulnerable to periodicities in the list. Consider a
street where the odd-numbered houses are all on one side of the road and the even-numbered
houses are all on the other. Under every-10th systematic sampling, the houses sampled will all be
either odd-numbered or even-numbered, so one side of the street is never represented. Another
drawback of systematic sampling is that even in
scenarios where it is more accurate than SRS, its theoretical properties make it difficult to
quantify that accuracy.
Systematic sampling is not SRS because different samples of the same size have different
selection probabilities - e.g. the set {4,14,24,...} has a one-in-ten probability of selection, but the
set {4,13,24,34,...} has zero probability of selection.
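The every-10th scheme with a randomized start can be sketched as follows (the directory names are hypothetical):

```python
import random

def systematic_sample(frame, interval):
    """Pick a random start within the first interval, then every interval-th unit."""
    start = random.randrange(interval)
    return frame[start::interval]

random.seed(0)
directory = [f"name_{i}" for i in range(100)]  # hypothetical telephone directory
sample = systematic_sample(directory, interval=10)
# The sample is evenly spaced through the list, e.g. {4, 14, 24, ...}.
```

Note how, once the start is fixed, the rest of the sample is determined, which is exactly why sets like {4,13,24,34,...} have zero selection probability.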
e. Stratified sampling
Where the population embraces a number of distinct categories, the frame can be organized by
these categories into separate "strata." Each stratum is then sampled as an independent sub-
population. Dividing the population into strata can enable researchers to draw inferences about
specific subgroups that may be lost in a more generalized random sample. Since each stratum is
treated as an independent population, different sampling approaches can be applied to different
strata. However, implementing such an approach can increase the cost and complexity of sample
selection.
A stratified sampling approach is most effective when three conditions are met:
i. Variability within strata is minimized
ii. Variability between strata is maximized
iii. The variables upon which the population is stratified are strongly correlated with the
desired dependent variable (beer consumption is strongly correlated with gender).
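A proportional-allocation sketch in Python (the frame, with a 60/40 split between two strata, is hypothetical):

```python
import random

# Hypothetical frame: each unit is (id, stratum), 60 female and 40 male.
frame = [(i, "female") for i in range(60)] + [(i, "male") for i in range(60, 100)]

def stratified_sample(frame, n, seed=1):
    """Sample each stratum independently, in proportion to its size."""
    random.seed(seed)
    strata = {}
    for unit in frame:
        strata.setdefault(unit[1], []).append(unit)
    sample = []
    for units in strata.values():
        k = round(n * len(units) / len(frame))  # proportional share of n
        sample.extend(random.sample(units, k))
    return sample

sample = stratified_sample(frame, n=10)
# Yields 6 female and 4 male units, mirroring the 60/40 population split.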
f. Cluster sampling
Sometimes it is cheaper to 'cluster' the sample in some way, e.g. by selecting respondents from
certain areas only, or certain time-periods only. Cluster sampling is an example of 'two-stage
random sampling': in the first stage a random sample of areas is chosen; in the second stage a
random sample of respondents within those areas is selected. Cluster sampling works best when
each cluster is a small copy of the population.
This can reduce travel and other administrative costs. Cluster sampling generally increases the
variability of sample estimates above that of simple random sampling, depending on how the
clusters differ between themselves, as compared with the within-cluster variation. If clusters
chosen are biased in a certain way, inferences drawn about population parameters will be
inaccurate.
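The two-stage scheme can be sketched as follows (the area counts and sample sizes are made-up assumptions):

```python
import random

# Hypothetical population: 20 areas, each with 50 residents.
areas = {a: [f"resident_{a}_{i}" for i in range(50)] for a in range(20)}

random.seed(7)
# Stage 1: a random sample of areas (clusters).
chosen_areas = random.sample(sorted(areas), k=4)
# Stage 2: a random sample of respondents within each chosen area.
respondents = [r for a in chosen_areas for r in random.sample(areas[a], k=5)]
# 20 respondents, but drawn from only 4 areas - cheap to visit,
# at the price of higher variability if areas differ between themselves.
```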
g. Matched random sampling
In this method there are two samples in which the members are clearly paired, or are matched
explicitly by the researcher (for example, IQ measurements or pairs of identical twins).
Alternatively, the same attribute, or variable, may be measured twice on each subject, under
different circumstances (example: the milk yields of cows before and after being fed a particular
diet).
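The before-and-after pairing can be sketched with hypothetical milk-yield figures; the analysis works on the within-pair differences rather than on the two groups separately:

```python
# Hypothetical paired data: milk yields (litres/day) for the same five cows
# measured before and after a change of diet.
before = [18.2, 20.1, 17.5, 21.0, 19.3]
after  = [19.0, 20.8, 18.1, 21.9, 19.9]

# Pairing removes cow-to-cow variation: each cow serves as its own control.
differences = [a - b for b, a in zip(before, after)]
mean_change = sum(differences) / len(differences)
```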
b. Random assignment
The second fundamental design principle is the randomization of the allocation of treatments (the
controlled variables) to units. Random assignment makes the groups comparable at the outset, so
that treatment effects, if present, are not confused with pre-existing differences between the
groups.
c. Replication
All measurements, observations or data collected are subject to variation, as there are no
completely deterministic processes. To reduce variability, measurements in the experiment must
be repeated. The experiment itself should also allow for replication, so that it can be checked by
other researchers.
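The effect of repeating measurements can be illustrated with a simulated noisy instrument (the true value and the noise level here are assumptions):

```python
import random
import statistics

random.seed(11)

def measure(true_value=5.0, noise_sd=1.0):
    """One noisy measurement of a hypothetical quantity."""
    return true_value + random.gauss(0, noise_sd)

few = [measure() for _ in range(5)]
many = [measure() for _ in range(500)]

# With more replicates, the sample mean settles closer to the true value.
error_few = abs(statistics.mean(few) - 5.0)
error_many = abs(statistics.mean(many) - 5.0)
```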
3. Sources of bias and confounding, including placebo effect and blinding
Sources of bias specific to medicine are confounding variables and placebo effects, among
others.
a. Confounding
A confounding variable is an extraneous variable in a statistical model that correlates (positively
or negatively) with both the dependent variable and the independent variable. The methodologies
of scientific studies therefore need to control for these factors to avoid a false positive (Type I)
error (an erroneous conclusion that the dependent variable is in a causal relationship with the
independent variable).
Example. Consider the statistical relationship between ice cream sales and drowning deaths.
These two variables have a positive correlation because both occur more often during hot summer
weather, which is the confounding variable. However, it would be wrong to conclude that there is
a cause-and-effect relation between them.
c. Blocking
Blocking is the arranging of experimental units in groups (blocks) that are similar to one another.
Typically, a blocking factor is a source of variability that is not of primary interest to the
experimenter. An example of a blocking factor might be the sex of a patient; by blocking on sex
(that is, comparing men to men and women to women), this source of variability is controlled for,
thus leading to greater precision.
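Blocking on sex followed by within-block randomization can be sketched as follows (the twelve patients are hypothetical):

```python
import random

# Hypothetical patients: (id, sex), six men and six women.
patients = [(f"p{i}", "M" if i < 6 else "F") for i in range(12)]

# Block on sex, then randomize treatment separately within each block,
# so sex-related variability does not mix with the treatment effect.
random.seed(9)
assignment = {}
for sex in ("M", "F"):
    block = [pid for pid, s in patients if s == sex]
    random.shuffle(block)
    half = len(block) // 2
    for pid in block[:half]:
        assignment[pid] = "treatment"
    for pid in block[half:]:
        assignment[pid] = "control"
# Each block ends up with an equal treatment/control split.
```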