This is the sixth article in a series from the Arizona State University College of Nursing and Health Innovation’s Center
for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the
delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and
patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the
highest quality of care and best patient outcomes can be achieved.
The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one
step at a time. Articles will appear every two months to allow you time to incorporate information as you work toward
implementing EBP at your institution. Also, we’ve scheduled “Chat with the Authors” calls every few months to provide
a direct line to the experts to help you resolve questions. Details about how to participate in the next call will be published with November's Evidence-Based Practice, Step by Step.
In July's evidence-based practice (EBP) article, Rebecca R., our hypothetical staff nurse, Carlos A., her hospital's expert EBP mentor, and Chen M., Rebecca's nurse colleague, collected the evidence to answer their clinical question: "In hospitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?" As part of their rapid critical appraisal (RCA) of the 15 potential "keeper" studies, the EBP team found and placed the essential elements of each study (such as its population, study design, and setting) into an evaluation table. In so doing, they began to see similarities and differences between the studies, which Carlos told them is the beginning of synthesis. We now join the team as they continue with their RCA of these studies to determine their worth to practice.

RAPID CRITICAL APPRAISAL
Carlos explains that typically an RCA is conducted along with an RCA checklist that's specific to the research design of the study being evaluated—and before any data are entered into an evaluation table. However, since Rebecca and Chen are new to appraising studies, he felt it would be easier for them to first enter the essentials into the table and then evaluate each study. Carlos shows Rebecca several RCA checklists and explains that all checklists have three major questions in common, each of which contains other more specific subquestions about what constitutes a well-conducted study for the research design under review (see Example of a Rapid Critical Appraisal Checklist).

Although the EBP team will be looking at how well the researchers conducted their studies and discussing what makes a "good" research study, Carlos reminds them that the goal of critical appraisal is to determine the worth of a study to practice, not solely to find flaws. He also suggests that they consult their glossary when they see an unfamiliar word. For example, the term randomization, or random assignment, is a relevant feature of research methodology for intervention studies that may be unfamiliar. Using the glossary, he explains that random assignment and random sampling are often confused with one another, but that they're very different. When researchers select subjects from within a certain population to participate in a study by using a random strategy, such as tossing a coin, this is random sampling. It allows the entire population to be fairly represented. But because it requires access to a particular population, random sampling is not always feasible. Carlos adds that many health care studies are based on a convenience sample—participants recruited from a readily available population, such as a researcher's affiliated hospital, which may or may not represent the desired population. Random assignment, on the other hand, is the use of a random strategy to assign study participants to the intervention or control group. Random assignment is an important feature of higher-level studies in the hierarchy of evidence.

Carlos also reminds the team that it's important to begin the RCA with the studies at the highest level of evidence in order to see the most reliable evidence first. In their pile of studies, these are the three systematic reviews, including the meta-analysis and the Cochrane review, they retrieved from their database search (see "Searching for the Evidence" and "Critical Appraisal of the Evidence: Part I," Evidence-Based Practice, Step by Step, May and July). Among the RCA checklists Carlos has brought with him, Rebecca and Chen find the checklist for systematic reviews.

As they start to rapidly critically appraise the meta-analysis, they discuss that it seems to be biased since the authors included only studies with a control group. Carlos explains that while having a control group in a study is ideal, in the real world most studies are lower-level evidence and don't have control or comparison groups. He emphasizes that, in eliminating lower-level studies, the meta-analysis lacks evidence that may be informative to the question. Rebecca and Chen—who are clearly growing in their appraisal skills—also realize that three studies in the meta-analysis are the same as three of their potential "keeper" studies. They wonder whether they should keep those studies in the pile, or if, as duplicates, they're unnecessary. Carlos says that because the meta-analysis only included studies with control groups, it's important to keep these three studies so that they can be compared with other studies in the pile that don't have control groups. Rebecca notes that more than half of their 15 studies don't have control or comparison groups. They agree as a team to include all 15 studies at all levels of evidence and go on to appraise the two remaining systematic reviews.

The MERIT trial1 is next in the EBP team's stack of studies.

ajn@wolterskluwer.com AJN ▼ September 2010 ▼ Vol. 110, No. 9 41
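The distinction Carlos draws between random sampling and random assignment can be sketched in a few lines of Python. This is only an illustration; the population size, sample size, and group names are invented for the example:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical sampling frame: every adult inpatient in a hospital system.
population = [f"patient_{i}" for i in range(1000)]

# Random SAMPLING: draw study subjects from the whole population by chance,
# so the sample fairly represents that population.
sample = random.sample(population, 100)

# Random ASSIGNMENT: allocate the sampled subjects to the intervention
# (rapid response team) or control group by a random strategy --
# the computational equivalent of tossing a coin for each subject.
groups = {"intervention": [], "control": []}
for subject in sample:
    groups[random.choice(["intervention", "control"])].append(subject)

print(len(sample))  # 100 subjects sampled from the population
print(len(groups["intervention"]) + len(groups["control"]))  # all 100 assigned
```

Note that the two steps are independent: a study built on a convenience sample (no random sampling) can still use random assignment, which is why random assignment, not sampling, marks the higher-level designs in the hierarchy of evidence.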
Rapid Critical Appraisal of the MERIT Study
1. Are the results of the study valid?
A. Were the subjects randomly assigned to the intervention and control groups? Yes No Unknown
Random assignment of hospitals was made to either a rapid response team (RRT; intervention) group or no RRT (con-
trol) group. To protect against introducing further bias into the study, hospitals, not individual patients, were randomly
assigned to the intervention. If patients were the study subjects, word of the RRT might have gotten around, potentially
influencing the outcome.
B. Was random assignment concealed from the individuals enrolling the subjects? Yes No Unknown
An independent statistician randomly assigned hospitals to the RRT or no RRT group after baseline data had been
collected; thus the assignments were concealed from both researchers and participants.
C. Were the subjects and providers blind to the study group? Yes No Unknown
Hospitals knew to which group they’d been assigned, as the intervention hospitals had to put the RRTs into practice.
Management, ethics review boards, and code committees in both hospitals knew about the intervention. The control
hospitals had code teams and some already had systems in place to manage unstable patients. But control hospitals
didn’t have a placebo strategy to match the intervention hospitals’ educational strategy for how to implement an RRT
(a red flag for confounding!). If you worked in one of the control hospitals, unless you were a member of one of the
groups that gave approval, you wouldn’t have known your hospital was participating in a study on RRTs; this lessens
the chance of confounding variables influencing the outcomes.
D. Were reasons given to explain why subjects didn’t complete the study? Yes No Not Applicable
This question is not applicable as no hospitals dropped out of the study.
E. Were the follow-up assessments long enough to fully study the effects of the intervention? Yes No Unknown
The intervention was conducted for six months, which should be adequate time to have an impact on the outcomes of car-
diopulmonary arrest rates (CR), hospital-wide mortality rates (HMR), and unplanned ICU admissions (UICUA). However,
the authors remark that it can take longer for an RRT to affect mortality, and cite trauma protocols that took up to 10 years.
F. Were the subjects analyzed in the group to which they were randomly assigned? Yes No Unknown
All 23 (12 intervention and 11 control) hospitals remained in their groups, and analysis was conducted on an intention-
to-treat basis. However, in their discussion, the authors attempt to provide a reason for the disappointing study results;
they suggest that because the intervention group was “inadequately implemented,” the fidelity of the intervention was
compromised, leading to less than reliable results. Another possible explanation involves the baseline quality of care; if
high, the improvement after an RRT may have been less than remarkable. The authors also note a historical confounder:
in Australia, where the study took place, there was a nationwide increase in awareness of patient safety issues.
H. Were the instruments used to measure the outcomes valid and reliable? Yes No Unknown
The primary outcome was the composite of HMR (that is, unexpected deaths, excluding do not resuscitates [DNRs]),
CR (that is, no palpable pulse, excluding DNRs), and UICUA (any unscheduled admissions to the ICU).
D. What are my patients’ and their families’ values and expectations for the outcome and the
treatment itself?
We will keep this in mind as we consider the body of evidence.
Were the subjects analyzed in the group to which they were randomly assigned? Rebecca sees the term intention-to-treat analysis in the study and says that it sounds like statistical language. Carlos confirms that it is; it means that the researchers kept the hospitals in their assigned groups when they conducted the analysis, a technique intended to reduce possible bias. Even though the MERIT study used this technique, Carlos notes that in the discussion section the authors offer some important caveats about how the study was conducted, including poor intervention implementation, which may have contributed to MERIT's unexpected findings.1

Was the control group appropriate? Carlos explains that it's challenging to establish an appropriate comparison or control group without an understanding of how the intervention will be implemented. In this case, it may be problematic that the intervention group received education and training in implementing the RRT and the control group received no comparable placebo (meaning education and training about something else). But Carlos reminds the team that the researchers attempted to control for known confounding variables by stratifying the sample on characteristics such as academic versus nonacademic hospitals, bed size, and other important parameters. This method helps to ensure equal representation of these parameters in both the intervention and control groups. However, a major concern for clinicians considering whether to use the MERIT findings in their decision making involves the control hospitals' code teams and how they may have functioned as RRTs, which introduces a potential confounder into the study that could possibly invalidate the findings.

Were the instruments used to measure the outcomes valid and reliable? The overall measure in the MERIT study is the composite of the individual outcomes: CR, HMR, and unplanned admissions to the ICU (UICUA). These parameters were defined reasonably and didn't include do not resuscitate (DNR) cases. Carlos explains that since DNR cases are more likely to code or die, including them in the HMR and CR would artificially increase these outcomes and introduce bias into the findings.

As the team moves through the questions in the RCA checklist, Rebecca wonders how she and Chen would manage this kind of appraisal on their own. Carlos assures them that they'll get better at recognizing well-conducted research the more RCAs they do. Though Rebecca feels less than confident, she appreciates his encouragement nonetheless, and chooses to lead the team discussion of the next question.

Were the demographics and baseline clinical variables of the subjects in each of the groups similar? Rebecca says that the intervention group and the control or comparison group need to be similar at the beginning of any intervention study because any differences in the groups could influence the outcome, potentially increasing the risk that the outcome might be unrelated to the intervention. She refers the team to their earlier discussion about confounding variables. Carlos tells Rebecca that her explanation was excellent. Chen remarks that Rebecca's focus on learning appears to be paying off.

WHAT ARE THE RESULTS?
As the team moves on to the second major question, Carlos tells them that many clinicians are apprehensive about interpreting statistics. He says that he didn't take courses in graduate school on conducting statistical analysis; rather, he learned about different statistical tests in courses that required students to look up how to interpret a statistic whenever they encountered it in the articles they were reading. Thus he had a context for how the statistic was being used and interpreted, what question the statistical analysis was answering, and what kind of data were being analyzed. He also learned to use a search engine, such as Google.com, to find an explanation for any statistical tests with which he was unfamiliar. Because his goal was to understand what the statistic meant clinically, he looked for simple Web sites with that same focus and avoided those with Greek symbols or extensive formulas that were mostly concerned with conducting statistical analysis.

How large is the intervention or treatment effect? As the team goes through the studies in their RCA, they decide to construct a list of statistics terminology for quick reference (see A Sampling of Statistics). The major statistic used in the MERIT study is the odds ratio (OR). The OR is used to provide insight into the measure of association between an intervention and an outcome. In the MERIT study, the control group did better than the intervention group, which is contrary to what was expected. Rebecca notes that the researchers discussed the possible reasons for this finding in the final section of the study. Carlos says that the authors' discussion about why their findings occurred is as important as the findings themselves. In this study, the discussion communicates to any clinicians considering initiating an RRT in their hospital that they should assess whether the current code team is already functioning
A Sampling of Statistics

Odds Ratio (OR)
Definition: The odds of an outcome occurring in the intervention group compared with the odds of it occurring in the comparison or control group.
• If an OR is equal to 1, then the intervention didn't make a difference.
• Interpretation depends on the outcome.
• If the outcome is good (for example, fall prevention), the OR is preferred to be above 1.
• If the outcome is bad (for example, mortality rate), the OR is preferred to be below 1.
Example: The OR for hospital-wide mortality rates (HMR) in the MERIT study was 1.03 (95% CI, 0.84–1.28). The odds of HMR in the intervention group were about the same as HMR in the comparison group.
Clinical implication: From the HMR OR data alone, a clinician may not feel confident that a rapid response team (RRT) is the best intervention to reduce HMR but may seek out other evidence before making a decision.

Relative Risk (RR)
Definition: The risk of an outcome occurring in the intervention group compared with the risk of it occurring in the comparison or control group.
• If an RR is equal to 1, then the intervention didn't make a difference.
• Interpretation depends on the outcome.
• If the outcome is good (for example, fall prevention), the RR is preferred to be above 1.
• If the outcome is bad (for example, mortality rate), the RR is preferred to be below 1.
Example: The RR of cardiopulmonary arrest in adults was reported in the Chan PS, et al., 2010 systematic review^a as 0.66 (95% CI, 0.54–0.80), which is statistically significant because there's no 1.0 in the CI. Thus, the RR of cardiopulmonary arrest occurring in the intervention group compared with the RR of it occurring in the control group is 0.66, or less than 1. Since cardiopulmonary arrest is not a good outcome, this is a desirable finding.
Clinical implication: The RRT significantly reduced the RR of cardiopulmonary arrest in this study. From these data, clinicians can be reasonably confident that initiating an RRT will reduce CR in hospitalized adults.

Confidence Interval (CI)
Definition: The range in which clinicians can expect to get results if they present the intervention as it was in the study.
• CI provides the precision of the study finding: a 95% CI indicates that clinicians can be 95% confident that their findings will be within the range given in the study.
• CI should be narrow around the study finding, not wide.
• If a CI contains the number that indicates no effect (for OR it's 1; for effect size it's 0), the study finding is not statistically significant.
Example: See the two previous examples.
Clinical implication: In the Chan PS, et al., 2010 systematic review,^a the CI is a close range around the study finding and is statistically significant. Clinicians can be 95% confident that if they conduct the same intervention, they'll have a result similar to that of the study (that is, a reduction in risk of cardiopulmonary arrest) within the range of the CI, 0.54–0.80. The narrower the CI range, the more confident clinicians can be that, using the same intervention, their results will be close to the study findings.

Mean (X̄)
Definition: Average.
• Caveat: Averaging captures only those subjects who surround a central tendency, missing those who may be unique. For example, the mean (average) hair color in a classroom of schoolchildren captures those with the predominant hair color. Children with hair color different from the predominant hair color aren't captured and are considered outliers (those who don't converge around the mean).
Example: In the Dacey MJ, et al., 2007 study,^a before the RRT the average (mean) CR was 7.6 per 1,000 discharges per month; after the RRT, it decreased to 3 per 1,000 discharges per month.
Clinical implication: Introducing an RRT decreased the average CR by more than 50% (7.6 to 3 per 1,000 discharges per month).

^a For study details on Chan PS, et al., and Dacey MJ, et al., go to http://links.lww.com/AJN/A11.
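The rule of thumb running through these entries, that a ratio statistic is significant only when its CI excludes 1, can be expressed directly. The sketch below applies it to the two intervals reported in the studies discussed here, then reproduces the Dacey et al. arithmetic on mean arrest rates:

```python
def is_significant(lower, upper, null_value=1.0):
    """A ratio statistic (OR or RR) is statistically significant
    when its confidence interval excludes the no-effect value of 1."""
    return not (lower <= null_value <= upper)

# MERIT hospital-wide mortality: OR 1.03 (95% CI, 0.84-1.28)
print(is_significant(0.84, 1.28))  # False: the CI contains 1, no significant effect

# Chan et al. 2010 cardiopulmonary arrest: RR 0.66 (95% CI, 0.54-0.80)
print(is_significant(0.54, 0.80))  # True: the CI excludes 1, a significant risk reduction

# Dacey et al. 2007: mean CR fell from 7.6 to 3 per 1,000 discharges per month
reduction = (7.6 - 3) / 7.6
print(f"{reduction:.0%}")  # 61%: more than a 50% drop in the average arrest rate
```

Note that `null_value` would be 0 rather than 1 when the statistic is an effect size or a difference in means, as the CI entry above points out.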