technology tested in the low-risk group, leading to the perception that the detection system was more
sensitive than a system tested in lower risk patients.
Unclear Definition of Study Endpoints. Medical technologies can be assessed at multiple levels,
depending upon whether they are diagnostic or therapeutic. The most basic level at which a diagnostic
technology can be assessed is the definition of its performance characteristics: sensitivity and specificity.
Even this basic level of assessment is not easy to perform. It requires comparison of the performance of
the new technology with that of a gold standard. And true gold standards (such as tissue obtained during
surgery) are not always available.
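As a minimal sketch of these performance characteristics, the following Python example computes sensitivity and specificity from counts classified against a gold standard. All numbers are hypothetical and purely illustrative.

```python
# Hypothetical counts comparing a new diagnostic test against a gold
# standard (e.g., tissue obtained during surgery). Illustrative only.
true_positives = 90    # diseased patients the test correctly flags
false_negatives = 10   # diseased patients the test misses
true_negatives = 180   # disease-free patients the test correctly clears
false_positives = 20   # disease-free patients the test wrongly flags

# Sensitivity: fraction of diseased patients the test detects.
sensitivity = true_positives / (true_positives + false_negatives)
# Specificity: fraction of disease-free patients the test clears.
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.2f}")  # 0.90
print(f"specificity = {specificity:.2f}")  # 0.90
```

Note that both figures depend on the gold standard being correct; when only an imperfect reference is available, the computed values will be biased accordingly.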
Bias. The confidence you can place in a study's results, and in the expectation that you would obtain the
same results if you used the technology in a similar fashion, depends on the absence of bias. A bias is a
systematic source of variation that distorts the results of a study in one direction or another. Many types
of bias have been described, including some that are especially problematic in cancer screening. The
most common general sources of bias in clinical trials are:
Confounding. A confounding variable is one that falsely obscures or accentuates the relationship between
two factors, such as the effect of a treatment on patient outcome. Confounding occurs when a factor other
than the interventions being compared is not distributed equally in the study groups being assessed and
affects the outcome of interest.
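A small numerical sketch may make this concrete. In the hypothetical counts below, disease severity is the confounder: it is unequally distributed across the two treatment arms and it affects mortality. Within each severity stratum the treatments perform identically, yet the crude (unstratified) comparison makes treatment A look worse.

```python
# Hypothetical counts illustrating confounding by disease severity.
# Treatment A is given mostly to severe patients, treatment B to mild ones.
strata = {
    # stratum: (deaths_A, n_A, deaths_B, n_B)
    "severe": (30, 100, 6, 20),    # 30% mortality on BOTH arms
    "mild":   (2, 20, 10, 100),    # 10% mortality on BOTH arms
}

deaths_A = sum(s[0] for s in strata.values())
n_A = sum(s[1] for s in strata.values())
deaths_B = sum(s[2] for s in strata.values())
n_B = sum(s[3] for s in strata.values())

# Crude comparison: A appears twice as deadly, purely due to confounding.
print(f"crude mortality, A: {deaths_A / n_A:.2f}")  # 0.27
print(f"crude mortality, B: {deaths_B / n_B:.2f}")  # 0.13
```

Stratifying (or randomizing so that severity is balanced) removes the spurious difference, since the stratum-specific mortality rates are equal by construction.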
Systematic Errors or Differences in Measurement. Selection bias can occur inadvertently if there are
systematic errors or differences in the way particular patient characteristics (e.g., eligibility criteria) are
measured or in the way a determination is made of the intervention to which a patient was exposed. (The
latter could be a problem when exposure to the intervention is ascertained from insurance claims data,
which may or may not be comprehensive or accurate.) The most common sources of bias due to
measurement error, however, arise in evaluation of the outcomes of patients in two arms of a study.
Ascertainment of patient outcomes by an unblinded investigator who knows what intervention each
patient received poses a serious risk of bias. An unblinded investigator, for example, may interpret
particular findings differently, or look for particular findings with varying efforts, if she or he has
preconceived notions about the comparative effects of the two technologies under study. Finally, although
it may seem obvious that measurements of the outcomes of patients in two arms of a study should be
performed in an identical manner and at the same point in time (relative to the interventions under study),
this important aspect of study design is not always followed.
Loss of Patients to Follow-Up. Anyone who has conducted an observational study knows how difficult it
is to follow patients over time. Loss of patients to follow-up becomes a threat to the internal validity of a
study when it occurs in a substantial proportion of patients and at differential rates in the various arms of
a study. Failure to account for all patients who were initially enrolled in a study is particularly
problematic. In one study submitted to the Food and Drug Administration (FDA), for example, data on
patients who had received a new device were reported only for those patients who were followed for at
least one year. Many patients dropped out of the study prior to the one-year endpoint, however, due either
to side effects or to ineffectiveness of the device. Consequently, the results reported to the FDA
exaggerated the effectiveness and tolerability of the device. All enrolled patients thus must be accounted
for. If some patients withdraw or are lost to follow-up, the number of withdrawals and losses in each arm
should be reported with specification of the reasons for withdrawal.
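The accounting rule above can be sketched in a few lines of Python. The arm names, counts, and withdrawal reasons below are hypothetical; the point is simply that enrolled patients must equal completers plus documented withdrawals and losses in every arm.

```python
# Hypothetical per-arm accounting of all enrolled patients, with
# withdrawals and losses broken out by reason, as the text recommends.
enrolled = {"device": 150, "control": 150}
completed = {"device": 95, "control": 138}
withdrawals = {
    "device":  {"side effects": 30, "ineffectiveness": 20, "lost to follow-up": 5},
    "control": {"side effects": 4, "ineffectiveness": 3, "lost to follow-up": 5},
}

for arm in enrolled:
    lost = sum(withdrawals[arm].values())
    # Every enrolled patient must be accounted for.
    assert completed[arm] + lost == enrolled[arm], f"{arm}: patients unaccounted for"
    print(f"{arm}: {lost / enrolled[arm]:.0%} did not complete; reasons: {withdrawals[arm]}")
```

In this sketch the device arm loses 37 percent of its patients versus 8 percent in the control arm; reporting only the completers would exaggerate the device's effectiveness and tolerability, exactly as in the FDA example above.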
Inappropriate Statistical Analysis and Planning. On occasion, statistical analyses reported in published
studies are not performed correctly, or the most appropriate statistical analyses are not performed. In other
instances, statistical issues, such as statistical power to detect a difference between two arms of a study if
one really existed, do not seem to have been adequately considered in planning the study, or had to be
compromised for practical reasons (such as study cost or patient availability). Consequently, the results
reported in some studies are misleading and have a significant probability of being wrong. Investigators
should report the statistical significance of their results, and provide 95 percent confidence intervals
around group differences or main effects. In addition, if relative risks or odds ratios are reported (such as
reporting that a particular outcome is twice as likely to occur with treatment A as with treatment B), the
absolute rate with which the outcome occurs also should be reported.
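The recommendation to pair relative measures with absolute rates and confidence intervals can be sketched as follows. The counts are hypothetical, and the confidence interval uses the standard log-scale (Katz) approximation for a relative risk, which is one common choice rather than the only valid method.

```python
import math

# Hypothetical counts: the outcome is twice as likely with treatment A
# as with treatment B, but the absolute rates are small in both arms.
events_A, n_A = 20, 1000
events_B, n_B = 10, 1000

risk_A = events_A / n_A            # absolute rate, arm A: 0.02
risk_B = events_B / n_B            # absolute rate, arm B: 0.01
relative_risk = risk_A / risk_B    # 2.0

# 95% confidence interval on the log scale (Katz approximation).
se_log_rr = math.sqrt(1 / events_A - 1 / n_A + 1 / events_B - 1 / n_B)
low = math.exp(math.log(relative_risk) - 1.96 * se_log_rr)
high = math.exp(math.log(relative_risk) + 1.96 * se_log_rr)

print(f"RR = {relative_risk:.1f} (95% CI {low:.2f}-{high:.2f})")
print(f"absolute rates: {risk_A:.1%} vs {risk_B:.1%}")
```

Reporting "RR = 2.0" alone would obscure the fact that the absolute difference is only one percentage point, and that the confidence interval here is wide enough to be consistent with little or no true difference.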
Poorly Described Techniques. Diagnostic and therapeutic techniques are often employed using very
specific protocols or techniques that affect the effectiveness or safety of the interventions. For example,
different pulse sequences can be used in magnetic resonance imaging studies and different software might
base comparisons of digitized mammography images on different calculations. Unless the technology
under study, and the technologies to which it is being compared, are clearly described, it is not possible to
meaningfully compare the results of one study to those of other studies of what appears to be the same
technology. Without such descriptions it also may be difficult or even impossible to judge the relevance of
the study results.
Footnotes