
  Introduction  

Epidemiology is the medical discipline which deals with the occurrence, causes and prevention of disease. Its methodology, used in public health to study outbreaks of disease and to design preventive measures, is widely applied in sports medicine to injury rather than illness or disease. The epidemiological approach contributes much to a better understanding of the incidence and causes of injuries and allows planning of prevention programs and the proper allocation of medical resources.

An understanding of the implications of assumptions inherent in the statistical methods underlying epidemiological methods is necessary if some common pitfalls are to be avoided (e.g. no clear hypothesis under test, poor definition of injury type, inappropriate controls, population under study not defined, over-generalization of results). Snow skiing injuries have been extensively studied, as has American football (because of the high incidence of head and spinal injuries). Interventional studies now being published examine the role of knee and ankle braces in injury prevention. With time, more multi-centre studies will be planned to gather data on the effects of interventional measures.

  Investigating sports injuries  

Epidemiological approaches to sports injuries may be descriptive or analytical (1,2).


Descriptive studies define the problem in terms of incidence and prevalence. Analytical
studies seek to identify risk factors with the goal of doing something about the injury rate, or
to evaluate the effectiveness of treatment regimes.
Incidence and prevalence Incidence (rate) of injury is the number of cases per unit time. The
rate of injury is measured as the number of injuries or injured athletes (note the significance
of the distinction in relation to multiple injuries which may be of different type) over a
specified period, and may be expressed in absolute or relative terms (e.g. 3.0 skiing injuries
per 1000 skier days). The risk of injury (the probability that an individual will be injured) is
measured in the exposed population as a cumulative incidence giving the proportion injured,
by actuarial methods (difficult) or from incidence densities. These two parameters provide the
basis for most studies of sports injuries.
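
As a minimal worked sketch (all figures invented for illustration, in Python), these two basic measures might be computed as follows:

# Minimal sketch (invented figures) of the two basic measures described above:
# incidence density (injuries per 1000 participant-days) and cumulative incidence (risk).

def incidence_per_1000_days(n_injuries, person_days):
    """Incidence density expressed per 1000 participant-days of exposure."""
    return 1000.0 * n_injuries / person_days

def cumulative_incidence(n_injured, n_at_risk):
    """Proportion of the exposed population injured over the study period."""
    return n_injured / n_at_risk

# e.g. 150 injuries recorded over 50000 skier-days of exposure
print(f"{incidence_per_1000_days(150, 50000):.2f} injuries per 1000 skier-days")
# e.g. 120 distinct skiers injured out of 4000 followed for the season
print(f"Cumulative incidence (risk): {cumulative_incidence(120, 4000):.1%}")

The same arithmetic underlies the skier-day and player-game units discussed under injury recording below.
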
Risk factors Identification of risk factors provides a means for doing something about the
sports injury problem. Both the observational and experimental approaches of analytical
epidemiology are used (3).

Observational study designs are of three main types:


Case-control (Fig. 1) - the injured group is compared with a non-injured group in relation to a
potential risk factor. Such studies are retrospective, easy to conduct and commonly used, but
careful matching of controls is important. The possible sources of bias, role of sampling
vagaries and confounding variables must be carefully assessed.
Cohort (Fig. 2) - similar design, but prospective in that groups exposed or not exposed to a
potential risk factor are recognized before injury and then followed through time. This
approach is less susceptible to information bias (see below), but data collection takes longer
and the method is more expensive to implement. Variations include surveillance designs
(continuous monitoring of a group of athletes, as under the National Athlete Injury/Illness
Reporting System in the USA or NEISS). In survival designs, survival curve analysis is used
to follow the reduction in the proportion remaining uninjured over the study period (as in
follow-up of orthopaedic joint replacement); a minimal sketch of this approach is given after
this list.
Cross-sectional (Fig. 3) - documents injuries and risk factors at one point in time, describing
prevalence and injury patterns. This approach is of limited value where rehabilitation times
after injury are long.
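
The survival-design variation mentioned under cohort studies above can be illustrated with a minimal Kaplan-Meier style sketch (hypothetical follow-up data, in Python; not the analysis of any published study):

# Minimal sketch of a Kaplan-Meier style estimate of the proportion of athletes
# remaining uninjured over a season. Hypothetical data: (week, injured) pairs,
# where injured=False means the athlete left the study uninjured (censored).

from collections import Counter

observations = [(2, True), (3, False), (5, True), (5, True), (8, False), (10, True), (12, False)]

def kaplan_meier(observations):
    """Return (week, proportion still uninjured) after each week in which an injury occurred."""
    events = Counter(w for w, injured in observations if injured)
    censored = Counter(w for w, injured in observations if not injured)
    at_risk = len(observations)
    surviving = 1.0
    curve = []
    for week in sorted(set(events) | set(censored)):
        d = events.get(week, 0)
        if d:
            surviving *= (at_risk - d) / at_risk
            curve.append((week, surviving))
        at_risk -= d + censored.get(week, 0)
    return curve

for week, p in kaplan_meier(observations):
    print(f"week {week}: {p:.2%} still uninjured")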

Experimental study designs are interventional. Subjects are assigned randomly to treatment
or control groups (e.g. prophylactic wearing of ankle splints in basketball). Ethical problems
may arise in relation to allocation/withholding of potentially useful treatment.
Effectiveness of treatment The effectiveness of treatment is best studied in randomized
clinical trials (Fig. 4) which should be double blinded as compliance with protocol is otherwise
difficult to achieve. The study plan covers selection of patients (inclusion and exclusion
criteria must be clearly defined), random allocation of treatments, treatment and analysis.

  Steps in study design and implementation  

Appropriate study design may at first appear daunting to the sports medicine doctor. It is best
practice to enlist collaboration with an epidemiologist or statistician before embarking on an
investigation to avoid the frustration of having your work rejected on the grounds of unsound
design, incorrect statistical analysis or simply failure to prove an hypothesis (hindsight often
shows that this would have been possible with proper planning to ensure adequate statistical
power, see below). Plan the investigation with definite objectives in view by formulating
test(s) of a working hypothesis (see discussion of null hypotheses below). Know what
questions you are asking and why they are relevant in the context of treatment or prevention.
Crunching large amounts of data, 'dredging' for statistically significant relationships of no
particular medical significance, may be ridiculous and perhaps unethical.
Key stages in planning research on sports injuries include:
Definition of injury - specification of the particular injury type(s) to be considered will start the
investigation. Usually injury is defined as being serious enough to need medical attention
from a doctor.
Diagnostic tests needed for the study should be assessed for accuracy or predictive value
(ability to pick up the condition). This depends on (1) sensitivity, measured as the fraction of
people with the condition who are correctly identified as positives by the test, and (2)
specificity, measured as the fraction of people without the condition who are identified as
negatives by the test. Predictive value should ideally be near 100% (achieved when both
sensitivity and specificity are near 100%). The kappa coefficient estimates interobserver
reliability (the extent to which two or more observers using the same test obtain the same
result). Intraobserver error is a measure of the consistency of one observer over multiple tests.
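
A minimal sketch (hypothetical counts, in Python) of how these measures are derived: sensitivity, specificity and predictive value from a 2x2 table of test result against true condition, and Cohen's kappa from a 2x2 agreement table between two observers.

# Minimal sketch (hypothetical counts only).

def test_accuracy(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # proportion of true cases the test picks up
    specificity = tn / (tn + fp)   # proportion of non-cases the test clears
    ppv = tp / (tp + fp)           # positive predictive value
    return sensitivity, specificity, ppv

def cohen_kappa(both_pos, a_pos_only, b_pos_only, both_neg):
    """Interobserver agreement corrected for chance, from a 2x2 agreement table."""
    n = both_pos + a_pos_only + b_pos_only + both_neg
    p_observed = (both_pos + both_neg) / n
    a_pos, b_pos = both_pos + a_pos_only, both_pos + b_pos_only
    p_chance = (a_pos * b_pos + (n - a_pos) * (n - b_pos)) / n**2
    return (p_observed - p_chance) / (1 - p_chance)

sens, spec, ppv = test_accuracy(tp=45, fp=5, fn=10, tn=140)
kappa = cohen_kappa(both_pos=40, a_pos_only=8, b_pos_only=6, both_neg=146)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}, PPV {ppv:.2f}, kappa {kappa:.2f}")
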
Injury recording requires a numerator (the number of injuries) and a denominator (the
population at risk, e.g. the number of persons skiing during a specified time period). The
population-time is the number of participants at risk multiplied by the time exposed to
potential injury (often expressed per 1000 player-games). In skiing the unit is 1000 skier-days
(1000 persons skiing for 1 day). These units provide a basis for comparisons between studies and between sports.
Controls must be similar to the study group, using the same inclusion and exclusion criteria
(apart from injury).

Bias is systematic error (usually unintentional) resulting in inaccuracy, and it must be
minimized. Such errors may arise from the way that subjects or controls are chosen
(selection bias) or measured (information bias), or from confounding variables. Common
sources are recall bias (in case-control studies, arising from retrospective recall of risk
factors); follow-up bias (in cohort studies, where players leaving the study differ from those
remaining); and historical bias (where comparison with 'historical controls' from an earlier
period does not take account of other changes over time).
Confounders (confounding variables) are systematic factors associated with the study
variable (e.g. risk factor) and the occurrence of injury in such a way as to obscure the true
relationship between study variable and injury. The confounder may itself be another risk
factor. Features of experimental design may help to mitigate difficulties caused by
confounding variables. For instance, in stratified trials, subjects are divided according to one
or more of the variables concerned (such as age, gender, smoker/non-smoker) and subjects
in each of these groups are then randomly allocated to control or treatment. The effect of the
grouping variable (potential confounder) is thus eliminated.
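
A minimal sketch (hypothetical subjects, in Python) of stratified random allocation of the kind described above; the stratum variables and group labels are illustrative only.

# Minimal sketch: subjects are grouped by potential confounders (here age group
# and gender) and then randomized to treatment or control within each stratum.

import random
from collections import defaultdict

subjects = [
    {"id": 1, "age_group": "<25", "gender": "F"},
    {"id": 2, "age_group": "<25", "gender": "M"},
    {"id": 3, "age_group": "25+", "gender": "F"},
    {"id": 4, "age_group": "25+", "gender": "M"},
    {"id": 5, "age_group": "<25", "gender": "F"},
    {"id": 6, "age_group": "25+", "gender": "M"},
]

def stratified_allocation(subjects, seed=0):
    rng = random.Random(seed)
    strata = defaultdict(list)
    for s in subjects:
        strata[(s["age_group"], s["gender"])].append(s)
    allocation = {}
    for members in strata.values():
        rng.shuffle(members)
        # Alternate assignment within each shuffled stratum keeps the arms balanced
        for i, s in enumerate(members):
            allocation[s["id"]] = "treatment" if i % 2 == 0 else "control"
    return allocation

print(stratified_allocation(subjects))
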
Informed consent of participating patients must be arranged as appropriate in relation to the
nature of the investigation before the study can be commenced. This is especially important
where treatment alternatives are planned. Where the study may identify particular persons or
ethnic/cultural groups, especially in publications dealing with the results, approval of the
project by the appropriate community leaders may be required (usually monitored by the
research/ethics committees of the researcher's institution). Many journals now require
evidence that such approvals were obtained before considering the results of such studies
for publication.

Pilot studies (small scale preliminary investigations) are often helpful or even essential.
Commencement of a definitive study will typically require approval or award of peer-reviewed
competitive funding as well as research/ethics committee approval from the sponsoring
institution. Both will ordinarily necessitate assessment of the statistical power (see below) of
the proposed investigation. Such calculations require estimation of the expected magnitude
and variance of the differences between the groups being compared, and hence the scale of
the anticipated response to treatment. Preliminary evaluation of possible confounding factors
may also be necessary. A pilot study is often the only way of providing this information where
the proposed research breaks new ground. Further, a pilot study may also be helpful in
testing and justifying proposed inclusion and exclusion criteria for study subjects, and in
providing evidence that proposed recruitment rates are realistic in relation to the proposed
time frame of the investigation.
Statistical power and the calculation of required sample size Does the observed difference
between two groups being compared reflect a real difference between them or is it merely a
reflection of chance sampling effects? Statistical tests enable calculation of the probability (P)
that a difference as large or larger than observed would arise from random sampling effects
alone. If sampling effects would account for differences of the observed magnitude only
rarely, we may judge it unlikely that chance alone accounts for the difference and conclude
that other systematic factors are involved. But just when do we regard the differences as
sufficiently likely to involve factors other than chance sampling effects that we call them
'statistically significant'? Where we set the cut-off between 'significant' and 'non-significant' is
entirely arbitrary. We can set the significance level, denoted as alpha, to any P-value that we
consider appropriate for a particular situation. However, by common usage to the point of it
having become conventional in biomedical studies, the critical threshold value of P is
generally set at 0.05, i.e. alpha = 0.05. At this threshold, if there is 1 chance in 20 or less
(<5% probability) that random sampling effects could account for a difference at least as
great as that observed, we regard the difference as significant, i.e. likely to arise from
systematic causes such as the effects of treatment. Other significance levels, such as alpha
= 0.02 or 0.01 may of course be chosen according to circumstances, but the criterion to be
applied in an investigation should be decided before the analysis is commenced.
Formally the use of probability in this way is based on testing the validity of the null
hypothesis (Ho) that there is no difference between the populations of which the groups
being studied represent random samples in relation to the attributes being compared (mean,
variance, proportion, survival curve). For each null hypothesis an alternative hypothesis (H1)
exists, here that the populations represented by the study samples are in fact different. In the
specific context of risk factor analysis a null hypothesis may be phrased to state that there is
no association between the independent variable (risk factor) and the dependent variable
(injury).
Once the null hypothesis has been formulated and an appropriate statistical test has been
selected (Fig. 5), P can be calculated and statistical significance judged according to where
alpha was set.
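
As a minimal sketch (invented 2x2 counts, in Python using scipy), a test of a null hypothesis of no association between a risk factor and injury might look like this:

# Minimal sketch (invented counts) of testing H0: no association between a
# risk factor (e.g. not wearing an ankle brace) and injury, at alpha = 0.05.

from scipy.stats import chi2_contingency

alpha = 0.05

# Rows: exposed / not exposed to the risk factor; columns: injured / not injured
table = [[30, 170],   # exposed:   30 injured, 170 uninjured
         [15, 185]]   # unexposed: 15 injured, 185 uninjured

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, P = {p:.3f}")
if p < alpha:
    print("Reject H0: the association is statistically significant at alpha = 0.05")
else:
    print("Do not reject H0: the observed difference could plausibly be due to chance")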

Statistical errors of two kinds, known as type I and type II errors, relate to the null hypothesis
as follows:
                          Null hypothesis accepted    Null hypothesis rejected
Null hypothesis true      Correct                     Type I error
Null hypothesis false     Type II error               Correct
A type I error arises if the null hypothesis is rejected (because the calculated value of P is
less than alpha) even though the null hypothesis is in fact true. A type I error is equivalent to
the mistake made if a verdict of guilty is brought in when the accused is innocent. The
probability of making a type I error is alpha, which was set by the investigator. The lower
alpha is set, the fewer the type I errors, but the higher the chance of type II errors. A type II
error arises when the null hypothesis is accepted even though it is false i.e. the alternative
hypothesis (H1) is true. A type II error is equivalent to the mistake made by a verdict of not
guilty when the accused is in fact guilty. The probability (beta) of making a type II error
depends on the size of the difference specified by the alternative hypothesis (H1), and this
reflects a decision on the part of the investigator as to the minimum difference regarded as
of practical value or clinical significance. Simultaneously reducing the chances of making
type I and II errors means increasing sample size. Whether this is feasible will depend on
practical considerations (e.g. recruitment rates) and cost/benefit considerations.
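
A minimal simulation sketch (invented injury rates and sample sizes, in Python using numpy and scipy) illustrating that when the null hypothesis is true the rejection rate approximates alpha (the type I error rate), and that when a real difference exists the rejection rate is the power (1 - beta):

# Minimal simulation sketch (invented rates). Each trial compares injury
# proportions in two groups of equal size with a chi-square test.

import numpy as np
from scipy.stats import chi2_contingency

def rejection_rate(p_control, p_treatment, n_per_group, alpha=0.05, trials=2000, seed=0):
    """Fraction of simulated trials in which H0 (no difference) is rejected."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(trials):
        injured_c = rng.binomial(n_per_group, p_control)
        injured_t = rng.binomial(n_per_group, p_treatment)
        table = [[injured_c, n_per_group - injured_c],
                 [injured_t, n_per_group - injured_t]]
        _, p, _, _ = chi2_contingency(table)
        rejections += p < alpha
    return rejections / trials

# H0 actually true: rejection rate should be close to alpha (type I error rate)
print("type I error rate:", rejection_rate(0.10, 0.10, n_per_group=200))
# H0 false (a real difference exists): rejection rate = power = 1 - beta
print("power:", rejection_rate(0.10, 0.05, n_per_group=200))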

Statistical power and the calculation of sample size The power of a statistical test is defined
as (1 - beta), where beta is the probability of making a type II error (see above). Statistical
power is the probability of finding a significant difference when the true difference between
the populations sampled is of a specified size (delta). The larger the sample size, the greater the power of the test.
Methods for calculating sample size (4,5) appropriate for studies of different kinds are
provided in most statistical packages (see below). Remember to include an adequate
allowance for likely drop-outs during the study. Long-term studies are particularly susceptible
to drop-out losses and poor follow-up rates (few have been successfully completed in the
field of sports injuries).
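
A minimal sketch (invented rates, in Python) of a sample-size calculation using the common normal-approximation formula for comparing two proportions; this is only one of several formulas in use and is shown to illustrate the inputs required:

# Minimal sketch: approximate sample size per group for a two-sided
# comparison of two injury proportions, before allowing for drop-outs.

from math import ceil
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)     # critical value for the chosen alpha
    z_beta = norm.ppf(power)              # corresponds to 1 - beta
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# e.g. expecting braces to cut the seasonal injury rate from 10% to 5%
print(f"about {n_per_group(0.10, 0.05)} subjects per group (before drop-out allowance)")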

Confidence intervals In many circumstances the arbitrary significant/not significant decision
may advantageously be replaced or supplemented by specifying confidence intervals (CI, set
to any level, but typically 95%) within which population values or differences estimated from
the study sample(s) are likely to lie (5).
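
A minimal sketch (invented counts, in Python) of a 95% confidence interval for a difference between two injury proportions, using the normal approximation:

# Minimal sketch: 95% CI for the difference between two injury proportions.

from math import sqrt
from scipy.stats import norm

def diff_proportion_ci(x1, n1, x2, n2, level=0.95):
    p1, p2 = x1 / n1, x2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)   # standard error of the difference
    z = norm.ppf(0.5 + level / 2)
    diff = p1 - p2
    return diff - z * se, diff + z * se

# e.g. 30/200 injured without braces vs 15/200 injured with braces
low, high = diff_proportion_ci(30, 200, 15, 200)
print(f"difference in injury proportions: 95% CI ({low:.3f}, {high:.3f})")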

Outliers Once data is collected, occasional values may be seen to fall outside the range of
the main body of data points. These must all be accounted for and must not be arbitrarily
discarded as erroneous without investigation. Some will turn out to be due to clerical or
instrumental (e.g. calibration shift) errors, unrecognized pathology in the subject, missed
exclusion criteria etc. Residual exceptions for which no explanation is found at the time may
in future yield new insights.
What statistical tests should be used for the analysis? This depends on the objectives of the
study and on the kind of data, whether measures of variables with underlying Gaussian
(normal) distributions, rank or score data, binomial (two outcome) data or survival curves
(Fig. 5) (5,6,7). For large scale or complex investigations it is generally prudent to recruit
professional statistical collaboration at the planning stage of the project. It may be too late to
achieve the full potential of an investigation if this is delayed until after data collection.
Implementation of data analysis will typically involve use of a computer software package.
Amongst the more widely used professional level packages for independent desk-top use
are:
Package               Company
SYSTAT V              Systat Inc, Evanston, IL, USA
STATISTICA
SPSS/PC+              SPSS Inc, Chicago, IL, USA
SAS                   SAS Institute Inc, Cary, NC, USA
MINITAB               Minitab Inc, State College, PA, USA
STATGRAPHICS Plus     STSC International Ltd, Windsor, Berks, UK

Authoritative but less comprehensive (and less expensive), although adequate for many
smaller scale projects, are:

Package               Company
INSTAT and PRISM      GraphPad Software Inc, San Diego, CA, USA
STATVIEW              Abacus Concepts Inc

What conclusions should be drawn from the study? If the research project was properly
planned with clearly formulated objectives based on well-stated hypotheses, the analysis will
necessarily provide the basis for statistical (mathematical) findings. But statistical
significance does not carry an inference of clinical importance (an observed effect in the real
world). Some statistical findings are medically meaningless and sometimes statistics may not
detect an important relationship from a given data set (chance, statistical power too low).
In summary:
• Shape the investigation by formulation of one or more testable hypotheses
• Estimate the necessary sample sizes; allow for drop-outs; check that proposed
recruitment rates are realistic in relation to the available subject pool
• Obtain necessary ethics approvals and informed consent of parties and individuals
concerned
• Use eligible subjects; apply inclusion and exclusion criteria rigorously
• Collect accurate information
• Watch for, and avoid, bias
• Eliminate or control confounding variables
• Practice good management procedures through all phases of the study
• Evaluate data (examine outliers, check for breakdown of recruitment or blinding criteria)
• Choose statistical tools appropriate in relation to data type and study objectives
• Draw conclusions cautiously, with emphasis on medical (rather than mathematical)
importance.

  Example: trends in skiing injuries in Australia

Downhill snow skiing injuries lend themselves to epidemiological study; large numbers of
injuries in one place over a short time period make it possible to collect many data quickly.
Trends in skiing injury type and rates in Australia were established from a review of 22261
injuries over 27 years (8).

Type of study: Observational, case series, retrospective
Definition of injury: Injury requiring medical attention
Rate of injury: 3.22 injuries per 1000 skier-days (in 1988)
Risk of injury: Not relevant here
Size of sample: All injuries within acceptance criteria for the duration of the study
Analyses: A computer-based statistics package was used to generate descriptive statistics
and for characterization of changes in the pattern and rates of injury over the study period. F
statistics were used for the overall injury rate analysis as these data exhibited a non-Gaussian
distribution. Injury classes (upper body, thumb, knee, tibial fracture, ankle fracture,
lacerations) were examined by contingency analysis. The threshold for significance was set
at alpha = 0.05. Statistically significant trends were demonstrated in four of the six injury
types.
Conclusions: Data on rate and type of injury were presented. Recommendations were made
in regard to upper body protection, thumb safety, binding function, organization of ski slopes
and instruction.

  Overview of sports injury rates  

There is little doubt that sporting injuries are on the increase and being seen more frequently
in emergency rooms around the world. In Sweden (where comprehensive injury statistics are
collected by national registration) the proportion of patients presenting with sports trauma
rose from 1.4% of all injuries in 1955 to 10% in 1988 (9). The number of participants has
increased, with 25 to 30% of the population of most Western countries involved in sport (over
100 million in North America).

In general the causes of injuries can be grouped into personal factors (age, gender,
experience - the last often not a major contributor), sports factors (contact, high-velocity,
indoor, activities with high jump rates are more dangerous) and environmental factors
(usually greater in good weather due to increased exposure) (10).

The relative risks of injury in different sports are reasonably well documented. Insurance data
from Sweden (Folksam Insurance Survey covering 27000 injuries from 1976 to 1983) (11)
showed that ice-hockey had the highest injury rate and basketball the lowest, with many
injuries only minor. The Swiss organization ‘Youth and Sports’ (12) involves 350000 young
people annually: ice-hockey, handball and soccer had the highest injury rates, with athletics
and gymnastics the lowest (Fig. 6). Overall, females were at greater risk of injury. Studies of
extensive series of injuries reveal that most are minor (65% are sprains or contusions) and
the overall incidence of injury is low (0.8%) (13). The lower extremity (foot and ankle) is the
most common site of injury (14).

MacLeod has reviewed rugby football injuries: 30% affected the face and neck; 20-35% were
severe (necessitating missing 2 weeks' play); 5-22% of players suffered concussion (not
graded, and higher in schoolboys); and catastrophic injury (quadriplegia) occurred
sporadically (true incidence not known) (15). Football injuries in the USA have also been
closely studied (16).
Some 50% of all National Football League (NFL) players are injured each season; 5% of the
injuries involve concussion.
Severity of injury is more important than incidence in relation to permanent disability, spinal
injury or death. Sherry found those most at risk of severe skiing injury to be males, children
(<14 years), experienced skiers, in collisions, using steep slopes, at high speed, on slushy or
deep snow, using long skis, without head protection and without operative binding releases.
Severe injuries (<5%) were considered to be life or limb threatening and to require hospital
care or cause long-term morbidity(17).

The Swedish Folksam study (11) found a 2.5% rate of permanent disability from soccer,
mainly involving motor vehicle accidents en route to or from games, or knee injuries (an
element of over-reporting was noted).
The relative risks of spinal injuries are outlined in Fig. 7. Diving shows the highest incidence
with downhill snow skiing (including specialist aerial manoeuvres) surprisingly low (see
Chapter 7). The incidence of spinal cord injury in Rugby Union in NSW (Australia) is 0.53 per
10000 registered participants per year (0.18 for Rugby League) (18).

Rule changes help. A dramatic reduction in lethal cervical spine injuries in American football
occurred between 1975 and 1984 after rule changes in 1976 outlawed contact with the
helmet/face mask (19). The likelihood of death from sport is also important. Downhill snow
skiing, for instance, has 0.87 fatalities per 1 million skier-days (subdivided into traumatic
0.24, cardiovascular 0.45, environmental/hypothermia 0.18) (20). Other sports have
significantly higher death rates: for example, water sports 2.8, firearm sports 1.3 and football
2.0 per 100000 (21). There is a greater likelihood of dying from car accidents travelling to or
from sporting events than from participation (in one ski season in the Perisher region,
Australia, 12 car deaths occurred, whilst there were 29 ski deaths over the previous 32
years).

The cost of injuries from sport has been estimated in Sweden to amount on average to
US$330 per case when treated in the emergency room, but US$4178 if admitted. Note that
the cost to society of treating osteoporosis at that time was 100 times the total cost of
sporting injuries.
References

1. M Schootman et al. Statistics in sports injury research. In: JC DeLee, D Drez (Eds) Orthopaedic Sports Medicine. Saunders, p 160-183.

2. J Ryan, A Pearl 1994 Epidemiological concepts in sports medicine. A brief review. Update in Sports Medicine. AAOS, p 3-16.

3. RL Lieber 1994 Experimental design and statistical analysis. In: SR Simon (Ed) Orthopaedic Basic Science, p 626-659.

4. DG Altman 1991 Practical Statistics for Medical Research. London, Chapman and Hall.

5. HA Kahn, CT Sempos 1989 Statistical Methods in Epidemiology. New York, OUP.

6. WL Hays 1988 Statistics. 4th Ed. Orlando, Harcourt Brace Jovanovich.

7. H Motulsky 1995 Intuitive Biostatistics. New York, OUP.

8. E Sherry, L Fenelon 1991 Trends in skiing injury type and rates in Australia. MJA 155, p 513-515.

9. E Ericksson 1994 An introduction and brief review. In: Oxford Textbook of Sports Medicine. OUP, p. ooo

10. FJG Backx et al. 1991 Injuries in high-risk persons and high-risk sports. Am J Sports Med, p 124-130.

11. Folksam Sports Injuries, 1976-1983. A report from Folksam, Stockholm 1985.

12. M de Loes 1995 Epidemiology of sports injuries in the Swiss Organization ‘Youth and Sports’, 1987-1989. Int J Sports Med 16, p 134-138.

13. ER Larkowski et al. 1995 Medical coverage for multievent sports competition. Mayo Clin Proc 70, p 549-555.

14. EM Tenvergert et al. 1992 J Sports Med Phys Fitness 32(2), p 214-220.

15. DAD MacLeod 1993 In: GR McLatchie, CME Lennox (Eds) The Soft Tissues. Trauma and Sports. Butterworth, p 372-381.

16. KN Waninger, JA Lombardo 1993 In: MB Mellion (Ed) Sports Medicine Secrets. Hanley and Belfus, p 343.

17. E Sherry 1986 Factors determining the severity of skiing trauma. MPH Thesis, Univ Sydney.

18. SF Wilson et al. 1996 Spinal cord injuries have fallen in Rugby Union players in NSW. BMJ 313, p 1550.

19. JS Torg, B Sennet 1987 Clinics in Sports Medicine 6, p 61-72.

20. E Sherry, L Clout 1988 Death from skiing in Australia. MJA 149, p 615-618.

21. JE Sheahy 1983 Death in downhill snow skiing. In: Ski Trauma and Safety. ASTM, Philadelphia, p 349-357.
