
By Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, Susan B. Stillwell, DNP, RN, CNE, and Kathleen M. Williamson, PhD, RN

Critical Appraisal of the Evidence: Part II


Digging deeper—examining the “keeper” studies.

This is the sixth article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved.

The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time. Articles will appear every two months to allow you time to incorporate information as you work toward implementing EBP at your institution. Also, we've scheduled "Chat with the Authors" calls every few months to provide a direct line to the experts to help you resolve questions. Details about how to participate in the next call will be published with November's Evidence-Based Practice, Step by Step.

In July's evidence-based practice (EBP) article, Rebecca R., our hypothetical staff nurse, Carlos A., her hospital's expert EBP mentor, and Chen M., Rebecca's nurse colleague, collected the evidence to answer their clinical question: "In hospitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?" As part of their rapid critical appraisal (RCA) of the 15 potential "keeper" studies, the EBP team found and placed the essential elements of each study (such as its population, study design, and setting) into an evaluation table. In so doing, they began to see similarities and differences between the studies, which Carlos told them is the beginning of synthesis. We now join the team as they continue with their RCA of these studies to determine their worth to practice.

RAPID CRITICAL APPRAISAL
Carlos explains that typically an RCA is conducted along with an RCA checklist that's specific to the research design of the study being evaluated—and before any data are entered into an evaluation table. However, since Rebecca and Chen are new to appraising studies, he felt it would be easier for them to first enter the essentials into the table and then evaluate each study. Carlos shows Rebecca several RCA checklists and explains that all checklists have three major questions in common, each of which contains other more specific subquestions about what constitutes a well-conducted study for the research design under review (see Example of a Rapid Critical Appraisal Checklist).

Although the EBP team will be looking at how well the researchers conducted their studies and discussing what makes a "good" research study, Carlos reminds them that the goal of critical appraisal is to determine the worth of a study to practice, not solely to find flaws. He also suggests that they consult their glossary when they see an unfamiliar word. For example, the term randomization, or random assignment, is a relevant feature of research methodology for intervention studies that may be unfamiliar. Using the glossary, he explains that random assignment and random sampling are often confused with one another, but that they're very different. When researchers select subjects from within a certain population to participate in a study by using a random strategy, such as tossing a coin, this is random sampling. It allows the entire population to be fairly represented. But because it requires access to a particular population, random sampling is not always feasible. Carlos adds that many health care studies are based on a convenience sample—participants recruited from a readily available population, such as a researcher's affiliated hospital, which may or may not represent the desired population. Random assignment, on the other hand, is the use of a random strategy to assign study
ajn@wolterskluwer.com AJN ▼ September 2010 ▼ Vol. 110, No. 9 41
participants to the intervention or control group. Random assignment is an important feature of higher-level studies in the hierarchy of evidence.

Carlos also reminds the team that it's important to begin the RCA with the studies at the highest level of evidence in order to see the most reliable evidence first. In their pile of studies, these are the three systematic reviews, including the meta-analysis and the Cochrane review, that they retrieved from their database search (see "Searching for the Evidence" and "Critical Appraisal of the Evidence: Part I," Evidence-Based Practice, Step by Step, May and July). Among the RCA checklists Carlos has brought with him, Rebecca and Chen find the checklist for systematic reviews.

As they start to rapidly critically appraise the meta-analysis, they discuss that it seems to be biased since the authors included only studies with a control group. Carlos explains that while having a control group in a study is ideal, in the real world most studies are lower-level evidence and don't have control or comparison groups. He emphasizes that, in eliminating lower-level studies, the meta-analysis lacks evidence that may be informative to the question. Rebecca and Chen—who are clearly growing in their appraisal skills—also realize that three studies in the meta-analysis are the same as three of their potential "keeper" studies. They wonder whether they should keep those studies in the pile, or if, as duplicates, they're unnecessary. Carlos says that because the meta-analysis only included studies with control groups, it's important to keep these three studies so that they can be compared with other studies in the pile that don't have control groups. Rebecca notes that more than half of their 15 studies don't have control or comparison groups. They agree as a team to include all 15 studies at all levels of evidence and go on to appraise the two remaining systematic reviews.

The MERIT trial1 is next in the EBP team's stack of studies.

Example of a Rapid Critical Appraisal Checklist


Rapid Critical Appraisal of Systematic Reviews of Clinical Interventions or Treatments
1. Are the results of the review valid?
A. Are the studies in the review randomized controlled trials? Yes No
B. Does the review include a detailed description of the search strategy used to find the relevant studies? Yes No
C. Does the review describe how the validity of the individual studies was assessed (such as methodological quality, including the use of random assignment to study groups and complete follow-up of subjects)? Yes No
D. Are the results consistent across studies? Yes No
E. Did the analysis use individual patient data or aggregate data? Patient Aggregate
2. What are the results?
A. How large is the intervention or treatment effect (odds ratio,
relative risk, effect size, level of significance)?
B. How precise is the intervention or treatment (confidence interval)?
3. Will the results assist me in caring for my patients?
A. Are my patients similar to those in the review? Yes No
B. Is it feasible to implement the findings in my practice setting? Yes No
C. Were all clinically important outcomes considered, including
both risks and benefits of the treatment? Yes No
D. What is my clinical assessment of the patient, and are there any
contraindications or circumstances that would keep me from
implementing the treatment? Yes No
E. What are my patients’ and their families’ preferences and
values concerning the treatment? Yes No
© Fineout-Overholt and Melnyk, 2005.



As we noted in the last installment of this series, MERIT is a good study to use to illustrate the different steps of the critical appraisal process. (Readers may want to retrieve the article, if possible, and follow along with the RCA.) Set in Australia, the MERIT trial examined whether the introduction of a rapid response team (RRT; called a medical emergency team or MET in the study) would reduce the incidence of cardiac arrest, death, and unplanned admissions to the ICU in the hospitals studied. To follow along as the EBP team addresses each of the essential elements of a well-conducted randomized controlled trial (RCT) and how they apply to the MERIT study, see their notes in Rapid Critical Appraisal of the MERIT Study.

ARE THE RESULTS OF THE STUDY VALID?
The first section of every RCA checklist addresses the validity of the study at hand—did the researchers use sound scientific methods to obtain their study results? Rebecca asks why validity is so important. Carlos replies that if the study's conclusion can be trusted—that is, relied upon to inform practice—the study must be conducted in a way that reduces bias or eliminates confounding variables (factors that influence how the intervention affects the outcome). Researchers typically use rigorous research methods to reduce the risk of bias. The purpose of the RCA checklist is to help the user determine whether or not rigorous methods have been used in the study under review, with most questions offering the option of a quick answer of "yes," "no," or "unknown."

Were the subjects randomly assigned to the intervention and control groups? Carlos explains that this is an important question when appraising RCTs. If a study calls itself an RCT but didn't randomly assign participants, then bias could be present. In appraising the MERIT study, the team discusses how the researchers randomly assigned entire hospitals, not individual patients, to the RRT intervention and control groups using a technique called cluster randomization. To better understand this method, the EBP team looks it up on the Internet and finds a PowerPoint presentation by a World Health Organization researcher that explains it in simplified terms: "Cluster randomized trials are experiments in which social units or clusters [in our case, hospitals] rather than individuals are randomly allocated to intervention groups."2

Was random assignment concealed from the individuals enrolling the subjects? Concealment helps researchers reduce potential bias, preventing the person(s) enrolling participants from recruiting them into a study with enthusiasm if they're destined for the intervention group or with obvious indifference if they're intended for the control or comparison group. The EBP team sees that the MERIT trial used an independent statistician to conduct the random assignment after participants had already been enrolled in the study, which Carlos says meets the criteria for concealment.

Were the subjects and providers blind to the study group? Carlos notes that it would be difficult to blind participants or researchers to the intervention group in the MERIT study because the hospitals that were to initiate an RRT had to know it was happening. Rebecca and Chen wonder whether their "no" answer to this question makes the study findings invalid. Carlos says that a single "no" may or may not mean that the study findings are invalid. It's their job as clinicians interpreting the data to weigh each aspect of the study design. Therefore, if the answer to any validity question isn't affirmative, they must each ask themselves: does this "no" make the study findings untrustworthy to the extent that I don't feel comfortable using them in my practice?

Were reasons given to explain why subjects didn't complete the study? Carlos explains that sometimes participants leave a study before the end (something about the study or the participants themselves may prompt them to leave). If all or many of the participants leave for the same reason, this may lead to biased findings. Therefore, it's important to look for an explanation for why any subjects didn't complete a study. Since no hospitals dropped out of the MERIT study, this question is determined to be not applicable.

Were the follow-up assessments long enough to fully study the effects of the intervention? Chen asks Carlos why a time frame would be important in studying validity. He explains that researchers must ensure that the outcome is evaluated for a long enough period of time to show that the intervention indeed caused it. The researchers in the MERIT study conducted the RRT intervention for six months before evaluating the outcomes. The team discusses how six months was likely adequate to determine how the RRT affected cardiopulmonary arrest rates (CR) but might have been too short to establish the relationship between the RRT and hospital-wide mortality rates (HMR).
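The cluster allocation described in the WHO slide can be sketched in a few lines of Python. This is an illustrative sketch only (the hospital list and helper function are hypothetical, not code from the MERIT protocol): whole hospitals are shuffled and divided between the two arms, so every patient in a hospital receives that hospital's assignment.

```python
import random

def cluster_randomize(clusters, seed=None):
    """Randomly allocate whole clusters (here, hospitals) to two arms.

    Individuals are never randomized separately: each hospital's patients
    all receive whatever arm their hospital was assigned to.
    """
    rng = random.Random(seed)
    shuffled = clusters[:]  # copy so the caller's list is left untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"intervention": shuffled[:half], "control": shuffled[half:]}

# Hypothetical example with 23 hospitals, the number in MERIT (the actual
# trial assigned 12 to intervention and 11 to control; this sketch simply
# splits the shuffled list, giving 11 and 12).
hospitals = [f"Hospital {i}" for i in range(1, 24)]
arms = cluster_randomize(hospitals, seed=42)
```

The key point the sketch makes is that the unit of randomization is the hospital, which is why word of the RRT spreading among patients within a hospital can't contaminate the group assignment.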


Rapid Critical Appraisal of the MERIT Study
1. Are the results of the study valid?
A. Were the subjects randomly assigned to the intervention and control groups? Yes No Unknown
Random assignment of hospitals was made to either a rapid response team (RRT; intervention) group or no RRT (con-
trol) group. To protect against introducing further bias into the study, hospitals, not individual patients, were randomly
assigned to the intervention. If patients were the study subjects, word of the RRT might have gotten around, potentially
influencing the outcome.

B. Was random assignment concealed from the individuals enrolling the subjects? Yes No Unknown
An independent statistician randomly assigned hospitals to the RRT or no RRT group after baseline data had been collected; thus the assignments were concealed from both researchers and participants.

C. Were the subjects and providers blind to the study group? Yes No Unknown
Hospitals knew to which group they’d been assigned, as the intervention hospitals had to put the RRTs into practice.
Management, ethics review boards, and code committees in both hospitals knew about the intervention. The control
hospitals had code teams and some already had systems in place to manage unstable patients. But control hospitals
didn’t have a placebo strategy to match the intervention hospitals’ educational strategy for how to implement an RRT
(a red flag for confounding!). If you worked in one of the control hospitals, unless you were a member of one of the
groups that gave approval, you wouldn’t have known your hospital was participating in a study on RRTs; this lessens
the chance of confounding variables influencing the outcomes.

D. Were reasons given to explain why subjects didn’t complete the study? Yes No Not Applicable
This question is not applicable as no hospitals dropped out of the study.

E. Were the follow-up assessments long enough to fully study the effects of the intervention? Yes No Unknown
The intervention was conducted for six months, which should be adequate time to have an impact on the outcomes of car-
diopulmonary arrest rates (CR), hospital-wide mortality rates (HMR), and unplanned ICU admissions (UICUA). However,
the authors remark that it can take longer for an RRT to affect mortality, and cite trauma protocols that took up to 10 years.

F. Were the subjects analyzed in the group to which they were randomly assigned? Yes No Unknown
All 23 (12 intervention and 11 control) hospitals remained in their groups, and analysis was conducted on an intention-
to-treat basis. However, in their discussion, the authors attempt to provide a reason for the disappointing study results;
they suggest that because the intervention group was “inadequately implemented,” the fidelity of the intervention was
compromised, leading to less than reliable results. Another possible explanation involves the baseline quality of care; if
high, the improvement after an RRT may have been less than remarkable. The authors also note a historical confounder:
in Australia, where the study took place, there was a nationwide increase in awareness of patient safety issues.

G. Was the control group appropriate? Yes No Unknown


See notes to question C. Controls had no time built in for education and training as the intervention hospitals did, so
this time wasn’t controlled for, nor was there any known attempt to control the organizational “buzz” that something
was going on. The study also didn’t account for the variance in how RRTs were implemented across hospitals. The
researchers indicate that the existing code teams in control hospitals “did operate as [RRTs] to some extent.” Because of
these factors, the appropriateness of the control group is questionable.

H. Were the instruments used to measure the outcomes valid and reliable? Yes No Unknown
The primary outcome was the composite of HMR (that is, unexpected deaths, excluding do not resuscitates [DNRs]),
CR (that is, no palpable pulse, excluding DNRs), and UICUA (any unscheduled admissions to the ICU).

I. Were the demographics and baseline clinical variables of the subjects in each of the groups similar? Yes No Unknown
The researchers provided a table showing how the RRT and control hospitals compared on several variables. Some variability existed, but there were no statistical differences between groups.

2. What are the results?


A. How large is the intervention or treatment effect?
The researchers reported outcome data in various ways, but the bottom line is that the control group did better than
the intervention group. For example, RRT calling criteria were documented more than 15 minutes before an event
by more hospitals in the control group than in the intervention group, which is contrary to expectation. Half the HMR
cases in the intervention group met the criteria compared with 55% in the control group (not statistically significant).
But only 30% of CR cases in the intervention group met the criteria compared with 44% in the control group, which
was statistically significant (P = 0.031). Finally, regarding UICUA, 51% in the intervention group compared with 55%
in the control group met the criteria (not significant). This indicates that the control hospitals were doing a better job of
documenting unstable patients before events occurred than the intervention hospitals.

B. How precise is the intervention or treatment?


The odds ratio (OR) for each of the outcomes was close to 1.0, which indicates that the RRT had no effect in the intervention hospitals compared with the control hospitals. Each confidence interval (CI) also included the number 1.0, which indicates that each OR wasn't statistically significant (HMR OR = 1.03, 95% CI 0.84–1.28; CR OR = 0.94, 95% CI 0.79–1.13; UICUA OR = 1.04, 95% CI 0.89–1.21). From a clinical point of view, the results aren't straightforward. It would have been much simpler had the intervention hospitals and the control hospitals done equally badly;
but the fact that the control hospitals did better than the intervention hospitals raises many questions about the
results.

3. Will the results help me in caring for my patients?


A. Were all clinically important outcomes measured? Yes No Unknown
It would have been helpful to measure cost, since participating hospitals that initiated an RRT didn’t eliminate their code
team. If a hospital has two teams, is the cost doubled? And what’s the return on investment? There’s also no mention of
the benefits of the code team. This is a curious question . . . maybe another PICOT question?

B. What are the risks and benefits of the treatment?


This is the wrong question for an RRT. The appropriate question would be: What is the risk of not adequately introduc-
ing, monitoring, and evaluating the impact of an RRT?

C. Is the treatment feasible in my clinical setting? Yes No Unknown


We have administrative support, once we know what the evidence tells us. Based on this study, we don’t know much
more than we did before, except to be careful about how we approach and evaluate the issue. We need to keep the
following issues, which the MERIT researchers raised in their discussion, in mind: 1) allow adequate time to measure
outcomes; 2) some outcomes may be reliably measured sooner than others; 3) the process of implementing an RRT is
very important to its success.

D. What are my patients’ and their families’ values and expectations for the outcome and the
treatment itself?
We will keep this in mind as we consider the body of evidence.


Were the subjects analyzed in the group to which they were randomly assigned? Rebecca sees the term intention-to-treat analysis in the study and says that it sounds like statistical language. Carlos confirms that it is; it means that the researchers kept the hospitals in their assigned groups when they conducted the analysis, a technique intended to reduce possible bias. Even though the MERIT study used this technique, Carlos notes that in the discussion section the authors offer some important caveats about how the study was conducted, including poor intervention implementation, which may have contributed to MERIT's unexpected findings.1

Was the control group appropriate? Carlos explains that it's challenging to establish an appropriate comparison or control group without an understanding of how the intervention will be implemented. In this case, it may be problematic that the intervention group received education and training in implementing the RRT and the control group received no comparable placebo (meaning education and training about something else). But Carlos reminds the team that the researchers attempted to control for known confounding variables by stratifying the sample on characteristics such as academic versus nonacademic hospitals, bed size, and other important parameters. This method helps to ensure equal representation of these parameters in both the intervention and control groups. However, a major concern for clinicians considering whether to use the MERIT findings in their decision making involves the control hospitals' code teams and how they may have functioned as RRTs, which introduces a potential confounder into the study that could possibly invalidate the findings.

Were the instruments used to measure the outcomes valid and reliable? The overall measure in the MERIT study is the composite of the individual outcomes: CR, HMR, and unplanned admissions to the ICU (UICUA). These parameters were defined reasonably and didn't include do not resuscitate (DNR) cases. Carlos explains that since DNR cases are more likely to code or die, including them in the HMR and CR would artificially increase these outcomes and introduce bias into the findings.

As the team moves through the questions in the RCA checklist, Rebecca wonders how she and Chen would manage this kind of appraisal on their own. Carlos assures them that they'll get better at recognizing well-conducted research the more RCAs they do. Though Rebecca feels less than confident, she appreciates his encouragement nonetheless, and chooses to lead the team discussion of the next question.

Were the demographics and baseline clinical variables of the subjects in each of the groups similar? Rebecca says that the intervention group and the control or comparison group need to be similar at the beginning of any intervention study because any differences in the groups could influence the outcome, potentially increasing the risk that the outcome might be unrelated to the intervention. She refers the team to their earlier discussion about confounding variables. Carlos tells Rebecca that her explanation was excellent. Chen remarks that Rebecca's focus on learning appears to be paying off.

WHAT ARE THE RESULTS?
As the team moves on to the second major question, Carlos tells them that many clinicians are apprehensive about interpreting statistics. He says that he didn't take courses in graduate school on conducting statistical analysis; rather, he learned about different statistical tests in courses that required students to look up how to interpret a statistic whenever they encountered it in the articles they were reading. Thus he had a context for how the statistic was being used and interpreted, what question the statistical analysis was answering, and what kind of data were being analyzed. He also learned to use a search engine, such as Google.com, to find an explanation for any statistical tests with which he was unfamiliar. Because his goal was to understand what the statistic meant clinically, he looked for simple Web sites with that same focus and avoided those with Greek symbols or extensive formulas that were mostly concerned with conducting statistical analysis.

How large is the intervention or treatment effect? As the team goes through the studies in their RCA, they decide to construct a list of statistics terminology for quick reference (see A Sampling of Statistics). The major statistic used in the MERIT study is the odds ratio (OR). The OR is used to provide insight into the measure of association between an intervention and an outcome. In the MERIT study, the control group did better than the intervention group, which is contrary to what was expected. Rebecca notes that the researchers discussed the possible reasons for this finding in the final section of the study. Carlos says that the authors' discussion about why their findings occurred is as important as the findings themselves. In this study, the discussion communicates to any clinicians considering initiating an RRT in their hospital that they should assess whether the current code team is already functioning



A Sampling of Statistics

Odds Ratio (OR)
Simple definition: The odds of an outcome occurring in the intervention group compared with the odds of it occurring in the comparison or control group.
Important parameters:
• If an OR is equal to 1, then the intervention didn't make a difference.
• Interpretation depends on the outcome.
• If the outcome is good (for example, fall prevention), the OR is preferred to be above 1.
• If the outcome is bad (for example, mortality rate), the OR is preferred to be below 1.
Understanding the statistic: The OR for hospital-wide mortality rates (HMR) in the MERIT study was 1.03 (95% CI, 0.84–1.28). The odds of HMR in the intervention group were about the same as HMR in the comparison group.
Clinical implications: From the HMR OR data alone, a clinician may not feel confident that a rapid response team (RRT) is the best intervention to reduce HMR but may seek out other evidence before making a decision.

Relative Risk (RR)
Simple definition: The risk of an outcome occurring in the intervention group compared with the risk of it occurring in the comparison or control group.
Important parameters:
• If an RR is equal to 1, then the intervention didn't make a difference.
• Interpretation depends on the outcome.
• If the outcome is good (for example, fall prevention), the RR is preferred to be above 1.
• If the outcome is bad (for example, mortality rate), the RR is preferred to be below 1.
Understanding the statistic: The RR of cardiopulmonary arrest in adults was reported in the Chan PS, et al., 2010 systematic reviewa as 0.66 (95% CI, 0.54–0.80), which is statistically significant because there's no 1.0 in the CI. Thus, the RR of cardiopulmonary arrest occurring in the intervention group compared with the RR of it occurring in the control group is 0.66, or less than 1. Since cardiopulmonary arrest is not a good outcome, this is a desirable finding.
Clinical implications: The RRT significantly reduced the RR of cardiopulmonary arrest in this study. From these data, clinicians can be reasonably confident that initiating an RRT will reduce CR in hospitalized adults.

Confidence Interval (CI)
Simple definition: The range in which clinicians can expect to get results if they present the intervention as it was in the study.
Important parameters:
• CI provides the precision of the study finding: a 95% CI indicates that clinicians can be 95% confident that their findings will be within the range given in the study.
• CI should be narrow around the study finding, not wide.
• If a CI contains the number that indicates no effect (for OR it's 1; for effect size it's 0), the study finding is not statistically significant.
Understanding the statistic: See the two previous examples.
Clinical implications: In the Chan PS, et al., 2010 systematic review,a the CI is a close range around the study finding and is statistically significant. Clinicians can be 95% confident that if they conduct the same intervention, they'll have a result similar to that of the study (that is, a reduction in risk of cardiopulmonary arrest) within the range of the CI, 0.54–0.80. The narrower the CI range, the more confident clinicians can be that, using the same intervention, their results will be close to the study findings.

Mean (X̄)
Simple definition: Average.
Important parameters:
• Caveat: Averaging captures only those subjects who surround a central tendency, missing those who may be unique. For example, the mean (average) hair color in a classroom of schoolchildren captures those with the predominant hair color. Children with hair color different from the predominant hair color aren't captured and are considered outliers (those who don't converge around the mean).
Understanding the statistic: In the Dacey MJ, et al., 2007 study,a before the RRT the average (mean) CR was 7.6 per 1,000 discharges per month; after the RRT, it decreased to 3 per 1,000 discharges per month.
Clinical implications: Introducing an RRT decreased the average CR by more than 50% (7.6 to 3 per 1,000 discharges per month).

a For study details on Chan PS, et al., and Dacey MJ, et al., go to http://links.lww.com/AJN/A11.
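For readers who want to see the arithmetic behind the OR, RR, and CI entries above, here is a minimal Python sketch. The counts are illustrative only, not data from MERIT or the cited reviews, and the CI uses the common log-odds (Woolf) approximation rather than any method named in these studies.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI from a 2x2 table.

    a = events in intervention group,  b = non-events in intervention group
    c = events in control group,       d = non-events in control group
    CI is the log-odds (Woolf) approximation: exp(ln OR +/- z * SE),
    where SE = sqrt(1/a + 1/b + 1/c + 1/d).
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

def relative_risk(a, b, c, d):
    """Relative risk: event rate in intervention over event rate in control."""
    return (a / (a + b)) / (c / (c + d))

# Illustrative table: 30 arrests among 1,000 intervention patients
# versus 50 arrests among 1,000 control patients.
or_, lo, hi = odds_ratio_ci(30, 970, 50, 950)
rr = relative_risk(30, 970, 50, 950)

# The rule the table states: if the CI spans 1.0, the finding is
# not statistically significant.
significant = not (lo <= 1.0 <= hi)
```

With these illustrative numbers the OR is about 0.59 with a CI entirely below 1.0, so the sketch reports a statistically significant reduction — the mirror image of the MERIT ORs, whose CIs all contained 1.0.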



as an RRT prior to RRT implementation.

How precise is the intervention or treatment? Chen wants to tackle the precision of the findings and starts with the ORs for HMR, CR, and UICUA, each of which has a confidence interval (CI) that includes the number 1.0. In an EBP workshop, she learned that a 1.0 in a CI for an OR means that the results aren't statistically significant, but she isn't sure what statistically significant means. Carlos explains that since the CIs for the ORs of each of the three outcomes contain the number 1.0, these results could have been obtained by chance and therefore aren't statistically significant. For clinicians, chance findings aren't reliable findings, so they can't confidently be put into practice. Study findings that aren't statistically significant have a probability value (P value) of greater than 0.05. Statistically significant findings are those that aren't likely to be obtained by chance and have a P value of less than 0.05.

WILL THE RESULTS HELP ME IN CARING FOR MY PATIENTS?
The team is nearly finished with their checklist for RCTs. The third and last major question addresses the applicability of the study—how the findings can be used to help the patients the team cares for. Rebecca observes that it's easy to get caught up in the details of the research methods and findings and to forget about how they apply to real patients.

Were all clinically important outcomes measured? Chen says that she didn't see anything in the study about how much an RRT costs to initiate and how to compare that cost with the cost of one code or ICU admission. Carlos agrees that providing costs would have lent further insight into the results.

What are the risks and benefits of the treatment? Chen wonders how to answer this since the findings seem to be confounded by the fact that the control hospitals had code teams that functioned as RRTs. She wonders if there was any consideration of the risks and benefits of initiating an RRT prior to beginning the study. Carlos says that the study doesn't directly mention it, but the consideration of the risks and benefits of an RRT is most likely what prompted the researchers to conduct the study. It's helpful to remember, he tells the team, that often the answer to these questions is more than just "yes" or "no."

Is the treatment feasible in my clinical setting? Carlos acknowledges that because the nursing administration is open to their project and supports it by providing time for the team to conduct its work, an RRT seems feasible in their clinical setting. The team discusses that nursing can't be the sole discipline involved in the project. They must consider how to include other disciplines as part of their next step (that is, the implementation plan). The team considers the feasibility of getting all disciplines on board and how to address several issues raised by the researchers in the discussion section (see Rapid Critical Appraisal of the MERIT Study), particularly if they find that the body of evidence indicates that an RRT does indeed reduce their chosen outcomes of CR, HMR, and UICUA.

What are my patients' and their families' values and expectations for the outcome and the treatment itself? Carlos asks Rebecca and Chen to discuss with their patients and their patients' families their opinion of an RRT and if they have any objections to the intervention. If there are objections, the patients or families will be asked to reveal them.

The EBP team finally completes the RCA checklists for the 15 studies and finds them all to be "keepers." There are some studies in which the findings are less than reliable; in the case of MERIT, the team decides to include it anyway because it's considered a landmark study. All the studies they've retained have something to add to their understanding of the impact of an RRT on CR, HMR, and UICUA. Carlos says that now that they've determined the 15 studies to be somewhat valid and reliable, they can add the rest of the data to the evaluation table.

Be sure to join the EBP team for "Critical Appraisal of the Evidence: Part III" in the next installment in the series, when Rebecca, Chen, and Carlos complete their synthesis of the 15 studies and determine what the body of evidence says about implementing an RRT in an acute care setting. ▼

Ellen Fineout-Overholt is clinical professor and director of the Center for the Advancement of Evidence-Based Practice at Arizona State University in Phoenix, where Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing, Susan B. Stillwell is clinical associate professor and program coordinator of the Nurse Educator Evidence-Based Practice Mentorship Program, and Kathleen M. Williamson is associate director of the Center for the Advancement of Evidence-Based Practice. Contact author: Ellen Fineout-Overholt, ellen.fineout-overholt@asu.edu.

REFERENCES
1. Hillman K, et al. Introduction of the medical emergency team (MET) system: a cluster-randomised controlled trial. Lancet 2005;365:2091-7.
2. Wojdyla D. Cluster randomized trials and equivalence trials [PowerPoint presentation]. Geneva, Switzerland: Geneva Foundation for Medical Education and Research; 2005. http://www.gfmer.ch/PGC_RH_2005/pdf/Cluster_Randomized_Trials.pdf.

