
Review

A review on systematic reviews of health information system studies
Francis Lau,1 Craig Kuziemsky,2 Morgan Price,3 Jesse Gardner1
1School of Health Information Science, University of Victoria, Victoria, British Columbia, Canada
2Telfer School of Management, University of Ottawa, Ottawa, Ontario, Canada
3Department of Family Practice, University of British Columbia, Vancouver, British Columbia, Canada

Correspondence to Professor Francis Lau, School of Health Information Science, University of Victoria, Victoria, British Columbia, Canada V8W 3P5; fylau@uvic.ca

An additional table is published online only. To view this file please visit the journal online (www.jamia.org).

An initial version of this review was presented at the 11th International Symposium of Health Information Management Research held in Halifax, Nova Scotia, Canada in fall of 2006.

Received 6 April 2010
Accepted 31 August 2010

ABSTRACT
The purpose of this review is to consolidate existing
evidence from published systematic reviews on health
information system (HIS) evaluation studies to inform HIS
practice and research. Fifty reviews published during
1994–2008 were selected for meta-level synthesis. These reviews covered five areas: medication management, preventive care, health conditions, data quality, and care process/outcome. After reconciling duplicates, the non-overlapping corpus comprised 1276 HIS studies. On the basis of a subset of 287
controlled HIS studies, there is some evidence for
improved quality of care, but in varying degrees across
topic areas. For instance, 31/43 (72%) controlled HIS
studies had positive results using preventive care
reminders, mostly through guideline adherence such as
immunization and health screening. Key factors that
influence HIS success included having in-house systems,
developers as users, integrated decision support and
benchmark practices, and addressing such contextual
issues as provider knowledge and perception, incentives,
and legislation/policy.

INTRODUCTION
The use of information technology to improve
patient care continues to be a laudable goal in the
health sector. Some argue we are near the tipping
point where one can expect a steady rise in the
number of health information systems (HISs)
implemented and their intensity of use in different
settings, especially by healthcare providers at point
of contact.1 A number of European nations are
already considered leaders in the use of electronic
medical records in primary care, where physicians
have been using electronic medical records in their
day-to-day practice for over a decade.2 As for our
current state of HIS knowledge, a 2005 review by Ammenwerth and de Keizer3 has identified 1035 HIS field evaluation studies reported during 1982–2002. Over 100 systematic reviews have also been published to date on various HIS evaluation studies. Despite the impressive number of HIS studies and reviews available, the cumulative evidence on the effects of HIS on the quality of care continues to be mixed or even contradictory. For example, Han et al4 reported an unexpected rise in mortality after their implementation of a computerized physician order entry (CPOE) system in a tertiary care children's hospital. Yet, Del Beccaro et al5 found no association between increased mortality and their CPOE implementation in a pediatric intensive care unit. Even in a computerized hospital, Nebeker et al6 found that high
adverse drug event (ADE) rates persisted. However,


as demonstrated by Ash et al,7 CPOE effects can be


unpredictable because of the complex interplay
between the HIS, users, workflows, and settings involved. There is a need for higher level synthesis to reconcile and make sense of these HIS evaluation studies, especially those systematic review findings
already published.
This review addresses the latter gap by
conducting a meta-level synthesis to reconcile the
HIS evidence base that exists at present. Our
overall aim is to consolidate published systematic
reviews on the effects of HIS on the quality of care.
This will help to better inform HIS practice and
research. In particular, this meta-level synthesis
offers three contributions to practitioners and
researchers involved with HIS implementation and
evaluation. Firstly, it provides a comprehensive
guide on the work performed to date, allowing one
to build on existing evidence and avoid repetition.
Secondly, by reconciling and reporting the systematic review findings in a consistent manner, we
translate these synthesized reviews in ways that
are relevant and meaningful to HIS practitioners.
Lastly, the consolidated evidence provides a rational
basis for our recommendations to improve HIS
adoption and identify areas that require further
research.
In this paper, we first describe the review method used. Then we report the review findings, emphasizing the meta-synthesis to make sense of the
published systematic reviews found. Lastly, we
discuss the knowledge and insights gained, and
offer recommendations to guide HIS practice and
research.

REVIEW METHOD
Research questions
This review is intended to address the current need
for a higher level synthesis of existing systematic
reviews on HIS evaluation studies to make sense of
the findings. To do so, we focused on reconciling the published evidence and comparing the evaluation metrics and quality criteria of the multiple studies. Our specific research questions were: (1)
What is the cumulative effect of HIS based on
existing systematic reviews of HIS evaluation
studies? (2) How was the quality of the HIS studies
in these reviews determined? (3) What evaluation
metrics were used in the HIS studies reviewed? (4)
What recommendations can be made from this
meta-synthesis to improve future HIS adoption
efforts? (5) What are the research implications?
Through this review, we aimed to synthesize the
disparate HIS review literature published to date in
ways that are rigorous, meaningful, and useful to
HIS practitioners and researchers. At the same time,
by examining the quality of the HIS studies reviewed and the
evaluation metrics used, we should be able to improve the rigor
of planning, conduct, and critique of future HIS evaluation
studies and reviews.

Review identification and selection


An extensive search of systematic review articles on HIS field
evaluation studies was conducted by two researchers using
Medline and Cochrane Database of Systematic Reviews covering
1966–2008. The search strategy combined terms in two broad
themes of information systems and reviews: the former included
information technology, computer system, and such MeSH
headings as electronic patient record, decision support, and
reminder system; the latter included systematic review, literature review, and review. The search was repeated by a medical
librarian to ensure all known reviews had been identified. The
reference sections of each article retrieved were scanned for
additional reviews to be included. A hand search of key health
informatics journals was carried out by the lead researcher, and
known personal collections of review articles were included.
The inclusion criteria used in this review focused on published
systematic reviews in English on HIS used by healthcare
providers in different settings. The meaning of HIS was broadly
defined on the basis of the categories of Ammenwerth and de
Keizer3 to cover different types of systems and tools for information processing, decision support, and management
reporting, but excluded telemedicine/telehealth applications,
digital devices, systems used by patients, and those for patient/
provider education. The reason for such exclusion was that
separate reviews were planned in these areas for subsequent
publication. All citation screening and article selection were
performed independently by two researchers and a second
librarian. Discrepancies in the review process were resolved by
consensus between the two researchers, and subsequently confirmed by the second librarian.

Meta-synthesis of the reviews


The meta-level synthesis involved reconciliation of key aspects
of the systematic review articles through consensus by two
researchers to make sense of the cumulative evidence. The meta-synthesis involved six steps: (1) the characteristics of each review were summarized by topic areas, care settings, HIS features, evaluation metrics, and key findings; (2) the assessment criteria used in the reviews to appraise the quality of HIS studies were compared; (3) the evaluation metrics used and the effects reported were categorized according to an existing HIS evaluation framework; (4) duplicate HIS studies from the reviews were reconciled to arrive at a non-overlapping corpus; (5) the aggregate effects of a subset of non-overlapping controlled HIS studies from selected topic areas were tabulated by HIS features and metrics already used as organizing schemes in the reviews; (6) factors identified in the reviews that influenced HIS success were consolidated and reported.
Specifically, the type and relationship of specific HIS features, metrics, and their effects on quality of care were summarized using the methods and outputs found in the existing HIS reviews. Five predefined topic areas for medication management,
preventive care, health conditions, data quality, and care
process/outcome were used. These topics were adapted from the
organizing schemes used in the reviews by Balas et al,8 Cramer
et al,9 and Garg et al10 which covered multiple healthcare
domains. The existing HIS evaluation framework used was the
Canada Health Infoway Benefits Evaluation (BE) Framework
already adopted in Canada.11 This is similar to the approach

used by van der Meijden et al12 in categorizing a set of evaluation


attributes from 33 clinical information systems according to the
Information System (IS) Success model by DeLone and
McLean13 on which the Infoway BE Framework was based.
To identify the subset of controlled HIS studies and their
effects, two researchers worked independently to retrieve the
full articles for all original HIS studies within the corpus to
extract the data on designs, metrics, and results. To aggregate
HIS effects, the vote-counting method applied in four reviews was used to tally the number of positive/neutral/negative studies based on significant differences between groups.8 10 14 15 In studies with multiple measures, Garg's method was adopted, where ≥50% of the results should be significant to be counted as positive.10 To visualize the aggregate effects, Dorr's method was applied to plot the frequency of positive, neutral, and negative studies in stacked bar graphs.14 The two researchers worked independently on the aggregate analysis and reconciled the outputs through consensus afterwards.
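To make the vote-counting and plotting steps concrete, the following is a minimal sketch in Python. The study records, topic labels, and plotting details are illustrative assumptions; only the ≥50% rule and the stacked-bar idea come from the methods described above, and this is not the authors' actual analysis code.

```python
from collections import Counter
import matplotlib.pyplot as plt

def classify_study(measure_outcomes):
    """Vote-count a single study from its per-measure results
    ('positive', 'neutral', or 'negative'). Following the >=50% rule,
    a study counts as positive if at least half its measures are positive."""
    n = len(measure_outcomes)
    pos = measure_outcomes.count("positive")
    neg = measure_outcomes.count("negative")
    if n and pos >= 0.5 * n:
        return "positive"
    return "negative" if neg > pos else "neutral"

# Hypothetical controlled studies grouped by topic area.
studies = {
    "Preventive care": [["positive", "positive"], ["positive", "neutral"]],
    "Medication management": [["positive", "neutral", "neutral"], ["neutral"]],
}

# Tally positive/neutral/negative studies per topic (vote counting).
tallies = {topic: Counter(classify_study(s) for s in group)
           for topic, group in studies.items()}

# Stacked bar graph of the aggregate effects, one bar per topic.
topics = list(tallies)
bottom = [0] * len(topics)
for outcome in ("positive", "neutral", "negative"):
    counts = [tallies[t][outcome] for t in topics]
    plt.bar(topics, counts, bottom=bottom, label=outcome)
    bottom = [b + c for b, c in zip(bottom, counts)]
plt.ylabel("Number of controlled HIS studies")
plt.legend()
plt.show()
```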

REVIEW FINDINGS
Synopsis of HIS reviews
Our initial library database and hand searches returned over 1200 citation titles/abstracts. By applying and refining the inclusion/exclusion criteria, we eventually identified 136 articles for further screening. Of these 136 articles, 58 were considered relevant and reviewed in detail. Of the 78 rejected articles, 23 were telehealth/telemedicine-related, 14 were patient-oriented systems, 11 were conceptual papers, seven had insufficient detail, seven involved other types of technologies, seven were not systematic reviews, five were on personal digital assistant devices, and four had HIS as only one of the interventions examined. Twenty-nine (50%) of the 58 selected review articles were published since 2005. Most had lead authors from the USA (22 (38%)) and UK (16 (28%)). The remaining reviews were from Canada (six (10%)), France (five (9%)), the Netherlands (four (7%)), Australia (three (5%)), Austria (one (2%)), and Belgium (one (2%)). Further examination of the 58 reviews showed that eight were updates or summaries of earlier publications. Hence, our final selection consisted of 50 review articles,8–10 14–60 which included the eight updated/summary reviews instead of the original versions.61–68 The review selection process is summarized in figure 1.
A synopsis of the 50 reviews by topic, author, care setting, study design, evaluation metric, and key findings is shown in table 1, available as an online data supplement at www.jamia.org. The HIS features in these reviews varied widely, ranging from the types of information systems and technologies used, the functional capabilities involved, to the intent of these systems. Examples are the review of administrative registers,19 reminders,27 and diabetes management,32 respectively. A variety of care settings were reported, including academic/medical centers, hospitals, clinics, general practices, laboratories, and patient homes. Most of the studies were randomized controlled trials and quasi-experimental and observational studies, although some were qualitative or descriptive in nature.18 30 56 59
In terms of evaluation metrics and study findings, most reviews included tables to show the statistical measures and effects as reported in the original field studies. These measures and effects were mostly related to detecting significant between-group differences in guideline compliance/adherence, utilization rates, physiologic values, and surrogate/clinical outcomes. Examples include cancer screening rates,38 clinic visit frequencies,14 hemoglobin A1c levels,32 lengths of stay,54 adverse events,40 and death rates.55 Four reviews on data quality
reported predictive values and sensitivity/specificity rates.19 35 39 56 Most of the reviews were narrative, with no pooling of the individual study results. Six reviews summarized their individual studies to provide aggregate assessment of whether the HIS had led to improvement in provider performance and patient outcome.8 14 15 29 30 32 For instance, Garg et al10 assigned a yes/no value to each HIS study depending on whether ≥50% of its evaluation metrics had significant differences. Only nine (18%) reviews included meta-analysis of aggregate effects.9 21 24 27 28 32 42 51 55 The metrics used in these nine meta-analyses were odds/risk ratios and standardized mean differences with CIs shown as forest plots; eight included summary statistics to describe the aggregate effects, seven adjusted for heterogeneity (four fixed effect,9 21 27 55 two random,28 32 and one mixed51), and three included funnel plots for publication bias.9 42 51

Figure 1 Review selection method. IS, information system; HIS, health information system; PDA, personal digital assistant.

Assessment of methodological quality
Of the 50 reviews included in the synthesis, 31 (62%) mentioned they had conducted an assessment of the methodological quality of the HIS studies as part of their review. Of these 31 reviews, 20 included the individual quality rating of each HIS study in the article or via a website. For quality assessment instruments, there were 16 different variations of 14 existing quality scales and checklists reported, while eight others were created by review authors on an ad hoc basis. Thirteen of these 24 quality assessment instruments had items with numeric ratings that added up to an overall score, while the remaining 11 were in the form of checklists mostly with items for yes/no responses. Of the 14 existing instruments mentioned, the most common was the five-item scale from Johnston et al,62 which was used in nine reviews.10 15 17 41 43 48 61 63 65
Further examination of the 24 instruments revealed three broad approaches. The first is based on the evidence-based medicine and Cochrane Review paradigm that assesses the quality of a HIS study design for potential selection, performance, attrition, and detection bias.69 The second extends the assessment to include the reporting of such aspects as inclusion/exclusion criteria, power calculation, and main/secondary effect variables. The third is on HIS data/feature quality by comparing specific HIS features against some reference standards. An example of the first approach is the Johnston five-item scale with 0–2 points each based on the method of allocation to study groups, unit of allocation, baseline group differences, objectivity of outcome with blinding, and follow-up for analysis.62 An example of the second is the 20-item scale by Balas et al32 which includes the study site, sampling and size, randomization, intervention, blinding of patients/providers/measurements, main/secondary effects, ratio/timing of withdrawals, and analysis of primary/secondary variables. The third example is the Jaeschke et al70 four-item checklist for data accuracy based on sample representativeness, independent/blind comparison against a reference standard not affected by test results, and reproducible method/results.

Types of evaluation metrics used
To make sense of the HIS evaluation metrics from the 50
reviews, we applied the Infoway BE Framework11 as an organizing scheme from which we could categorize the measures in meaningful ways. The BE Framework explains how information, system, and service quality can affect the use of an HIS and user satisfaction, which in turn can influence the net benefits that are realized over time. In this framework, net benefits are measured under the dimensions of healthcare quality, provider productivity, and access to care. Measures that did not fit into the existing BE dimensions were grouped under new categories that we created on the basis of the types of measures and effects involved. A summary of the evaluation metrics from the 50 reviews under the BE Framework dimensions of system, information and service quality, HIS usage and satisfaction, and net benefits of care quality, productivity and access is shown in table 1. The additional categories of evaluation metrics identified in our meta-synthesis are shown in table 2.
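As a simple illustration of this categorization step, here is a minimal sketch in Python. The dimension names follow the BE Framework as described above; the specific measure lists, the mapping table, and the helper function are illustrative assumptions, not the authors' actual coding scheme.

```python
# Hypothetical mapping of evaluation measures to BE Framework dimensions
# (system/information/service quality, use and satisfaction, and net
# benefits of care quality, productivity, and access).
BE_DIMENSIONS = {
    "system quality": ["functionality", "decision support accuracy"],
    "information quality": ["data accuracy", "data completeness"],
    "service quality": ["responsiveness of support"],
    "use": ["actual HIS use", "self-reported use"],
    "satisfaction": ["provider satisfaction", "usability"],
    "net benefits: care quality": ["medication errors", "guideline adherence"],
    "net benefits: productivity": ["provider time", "healthcare cost"],
    "net benefits: access": ["service availability"],
}

def categorize(measure: str) -> str:
    """Return the BE dimension for a measure, or flag it as an
    additional category outside the framework (as in table 2)."""
    for dimension, measures in BE_DIMENSIONS.items():
        if measure in measures:
            return dimension
    return "additional category (not in BE Framework)"

print(categorize("guideline adherence"))   # net benefits: care quality
print(categorize("reimbursement mix"))     # additional category (not in BE Framework)
```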
In table 1, under the HIS quality dimensions, most of the evaluation metrics reported were on system function and information content. Examples of functionality include evaluation of: CPOE with integrated, stand-alone, or no decision support features24 29; commercial HIS compared with home-grown systems22 24; and the accuracy of decision support triggers such as medication alerts.9 23 Examples of information content metrics were related to the accuracy, completeness, and comprehension of electronic patient data collected.19 20 Under the HIS use dimensions, most of the measures were on actual HIS use, provider satisfaction, and usability.14 20 25 Under the net benefits dimensions, most of the measures were around care quality and provider productivity. For care quality, the most common measures in the patient safety category were medical errors and reportable and drug dosing-related events.20 22 24 In the appropriateness and effectiveness category the most common measures were adherence/compliance to guidelines and protocols.8 27 In the health outcomes category the most common measures include mortality/morbidity, length of stay, and physiological and psychological measures.8 60 For provider productivity, the most common measures in the efficiency category were resource utilization and provider time spent, time-to-care, and service turnaround time.8 27 32 In the net cost category, different types of healthcare costs, especially hospital and drug charges, were among the common measures.8 32 45
Table 2 shows the measures from the reviews that did not fit into the dimensions/categories under the BE Framework. The
Table 1 Mapping factors from HIS studies to the benefits evaluation framework

HIS quality
SYSTEM QUALITY
- Functionality (features, DS levels): HIS±DS15 24 29 30 33 40-42 50 53 54 59 60; commercial versus home grown10 22 24 30; HIS accuracy9 23 29 30 33 34 42 59
- Performance (access, reliability, response time): none
- Security (features, levels of support): secure access20
INFORMATION QUALITY
- Content (completeness, accuracy, comprehension): accuracy/completeness14 19 20 25 35 37 39
- Availability (timeliness, reliability, consistency): none
SERVICE QUALITY
- Service (responsiveness of support): none

HIS use
USAGE
- Use behavior/pattern (actual system use): actual HIS use9 20 25 29 30 36 40-42 44 46
- Self-reported use (perceived system use): perceived improvement29 58
- Intention to use (non-user proportion/readiness): none
SATISFACTION
- Competency (knowledge, skills, expertise): provider knowledge44
- User perception (expectations, experiences): provider satisfaction20 25 29 30 43 53
- Ease of use (user-friendliness, learnability): usability14 25 29 30 53 57 58

Net benefits
CARE QUALITY
- Patient safety (AE, surveillance, risk reduction): medical errors/reportable events16 20 22 24 29 30 33 35 40 43 45 46 49 52 54 56-60; drug dosing9 10 15 21 22 28 30-32 40 43 48 49 52 54 55
- Appropriateness and effectiveness (guidelines, care continuity, practice standards): adherence/compliance8-10 14-18 20 22 25-27 29 30 32 33 37 38 41 43 44 49 51 53 54 58
- Health outcomes (surrogate, clinical, status): mortality/morbidity/LOS13 14 20 21 23 25 27 28 30 32 33 37 46 48 49 52 54 55 57; physio/psychological measures8-10 14 17 20 21 32 36 43 44 48-50 53-55; quality of life9 14 32 36
PRODUCTIVITY
- Efficiency (utilization, outputs, capacity): resource utilization8-10 14-18 20-22 25 27 29 30 32 33 36 37 41-44 47-49 52-54 57 58; provider time20 22 25 29 30 32 33 37 43 46-48 57 58; time-to-care/turnaround10 22 23 33 48 49 53 58
- Care coordination (continuity, team care): communication33 58
- Net cost (avoidance, reduction, savings): healthcare cost8 9 14 16-18 20 22 23 29 30 33 36 43 46 47 49 52 53 58
ACCESS
- Access (service availability/accessibility, patient and provider participation, self-care): availability/accessibility (none); participation/self-care communication25; patient-initiated/self-care8 16 17 20 32

AE, adverse event; DS, decision support; HIS, health information system; LOS, length of stay.

most common measures were related to patients/providers, such as their knowledge, attitude, perception, compliance, decision confidence, overall satisfaction, and relationships.9 36 43 Another group of measures were implementation related, including barriers, training, organizational support, project management, leadership, and cost.20 25 32 40 Others were related to legislation/policy, such as mandate and confidentiality,20 58 as well as the correlation between HIS features and the extent of changes and intended effects.14 15
Finally, we created a visual diagram in figure 2 to show the frequency distribution of HIS studies for the evaluation dimensions/categories examined in the reviews. From this figure one can see that efficiency, health outcomes, and patient safety are the three categories with the most HIS studies reported. Conversely, there are few or no studies for such categories as care coordination, user competency, information availability, and service quality.

Non-overlapping review corpus


The 50 reviews in our meta-synthesis covered 2122 HIS studies.
However, many of these studies were duplicates, as they
appeared in more than one review. For instance, the 1999
CPOE study by Bates et al71 was appraised in seven different
reviews.22 24 30 40 46 49 60 The 50 reviews covered the topics of
medication management, preventive care, health conditions, and
data quality, plus an assortment of care process/management.
There were multiple reviews published in each of these five
areas, and they all had overlapping HIS studies. For example,
there were 13 reviews with 275 HIS studies on medication
management. But only 206 of these studies were unique, as the
remaining 69 were duplicates. Some studies were reviewed
differently, not only from a methodological standpoint, but also
in the indicators examined. Four of the reviews under care
process/outcome each contained 100 or more HIS studies in
multiple domains.10 15 20 22 Yet, many of these studies were also

contained in the reviews under the four other topic areas


mentioned. For instance, the review by Garg et al10 on clinical
decision support systems (CDSS) had 100 HIS studies covering
the domains of diagnosis, prevention, disease management, drug
dosing, and prescribing. However, only eight (8%) were unique
studies72-79 that had not already appeared in the other reviews.
As part of our meta-synthesis, we reconciled the 50 reviews to
eliminate duplicate HIS studies to arrive at a non-overlapping
corpus. When the HIS studies appeared in multiple reviews, we
included them just once in the most recent review under
a specific topic where possible. After the reconciliation, we
arrived at 1276 non-overlapping HIS studies. Next we took the
30 reviews under the four topic areas of medication management, preventive care, health conditions, and data quality to
examine the HIS and effects reported. The 20 reviews under care
process/outcome were not included as they were too diverse for
meaningful categorization and comparison. Upon closer examination, we found that over half of the 709 non-overlapping HIS
studies in these 30 reviews contained descriptive results, insufficient detail, no control groups, patient/paper systems or special
devices, which made it infeasible to tabulate the effects. For
example, we eliminated 24 of the 67 HIS studies in the review of
Eslami et al30 as they had no controls or insufficient detail for comparison. After this reconciliation, we reduced the 709 non-overlapping HIS studies to 287 controlled HIS studies for the 30
reviews under the four topic areas.
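The reconciliation step can be pictured with a short sketch: each study is kept once and assigned to the most recent review on its topic. The review records, study identifiers, and field names below are illustrative assumptions rather than the actual data set used in this synthesis.

```python
# Hypothetical review records: each review lists the primary studies it appraised.
reviews = [
    {"id": "R1", "topic": "medication management", "year": 2003,
     "studies": ["S1", "S2", "S3"]},
    {"id": "R2", "topic": "medication management", "year": 2008,
     "studies": ["S2", "S3", "S4"]},
]

def build_corpus(reviews):
    """Assign each study to the most recent review that covers it,
    so every study appears exactly once in the non-overlapping corpus."""
    assignment = {}
    for review in sorted(reviews, key=lambda r: r["year"]):
        for study in review["studies"]:
            # Later (more recent) reviews overwrite earlier assignments.
            assignment[study] = review["id"]
    return assignment

corpus = build_corpus(reviews)
print(len(corpus), "non-overlapping studies")  # 4
print(corpus)  # {'S1': 'R1', 'S2': 'R2', 'S3': 'R2', 'S4': 'R2'}
```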

Associating HIS features, metrics with effects


The cumulative effects by HIS features from the 30 reviews for the four topic areas are shown in table 3 and summarized in figure 3A. In table 3, the overall ratio of positive controlled HIS studies is 180/287 (62.7%). The most effective HIS features were computer-based reminder systems in preventive care (100%), CDSS reminders/alerts in medication management (80%), and disease management orders/alerts in health conditions (80%). The HIS features that were somewhat effective included CPOE medication orders (66.1%), reminders in printed form (69.6%), and reminders combined with other interventions (66.7%). Facility-based electronic patient record (EPR) systems and administrative registers/research databases had better data quality than primary care EPR systems (76.2% and 70.4% vs 58.3%). Note that 98/287 (34.1%) of these controlled HIS studies reported no significant effects, mostly in the area of disease management, where 30/57 (52.6%) had neutral findings.

Table 2 Additional measures not found in the benefits evaluation framework
Category: HIS evaluation metrics (review reference sources)
- Patient/provider: patient knowledge, attitude, perception, decision confidence, compliance (8 9 32 36 43 44 53 57); patient/provider overall satisfaction (20 25 32 36 43 53 58); patient/provider knowledge acquisition, relationship (9 25 43 57); provider attitude, perceptions, autonomy, experience and performance (10 25 43 57); workflow (14 20 30)
- Incentives: reimbursement mix, degree of capitation (22 40)
- Implementation: barriers, training, organizational support, time-to-evaluation, lessons, success factors, project management, leadership, costs (14 20 22 25 40 43 45 49)
- Legislation/policy: privacy, security, legislation, mandates, confidentiality (14 20 25 40 43 58)
- Correlation: correlation of HIS feature/use with change in process, outcome, success (14 15 49 54)
- Change/improvement: data quality improvement, reduced loss/paper and transcription errors, DS improvement (35 57 59)
- Interoperability: information exchange and interoperability, standards (20)
DS, decision support.
Next, the cumulative effects by evaluation measures from the
30 reviews for the four topic areas are shown in table 4 and
summarized in Figure 3B. In total, 575 evaluation measures were
reported in the 287 controlled HIS studies. Table 4 shows that
the overall ratio of HIS metrics with positive effects is 313/575
(54.4%). The HIS metrics with positive effects are mostly under
the dimension of care quality in patient safety for medication
errors (63.6%), and in guideline adherence for immunization
(84.6%), health screening (66.7%), tests/assessments/care
(64.4%), and medications (61.8%). Under information quality,
76.4% of HIS metrics had positive effects in content accuracy,
and 61.0% were positive in completeness. Note that 244/575
(42.4%) of HIS metrics showed no significant effects, mostly in
the areas of health outcomes, adverse event detection, and
resource utilization.

Summary of key findings


The take-home message from this review is that there is some
evidence for improved quality of care, but in varying degrees
across topic areas. For instance, HIS with CPOE and CDSS were
effective in reducing medication errors, but not those for drug
dosing in maintaining therapeutic target ranges or ADE monitoring because of high signal-to-noise ratios. Reminders were
effective mostly through preventive care guideline adherence. The
quality of electronic patient data was generally accurate and
complete. Areas where HIS did not lead to significant improvement included resource utilization, healthcare cost, and health outcomes. However, in many instances, the studies were not designed for, nor had sufficient power/duration to properly assess, health outcomes. For provider time efficiency, four of 12 studies reported a negative effect where the HIS required more time and effort to complete the tasks. Caution is needed when interpreting these findings, because there were wide variations in organizational
contexts and how the HIS were designed/implemented, used, and
perceived. In some cases, the HIS was only part of a complex set of interventions that included changes in clinical workflow, provider behavior, and scope of practice.

Figure 2 Distribution of health information system studies by evaluation dimensions/categories.
Table 3 Frequency of positive, neutral, and negative controlled health information system (HIS) studies by reported HIS features

HIS features: Positive (%), Neutral (%), Negative (%), Total
Medication management
- CPOE medication orders: 41 (66.1), 17 (27.4), 4 (6.5), 62
- CDSS reminders/alerts/feedback: 12 (80.0), 3 (20.0), 0 (0.0), 15
- Drug dosing/prescribing: 11 (52.4), 10 (47.6), 0 (0.0), 21
- Adverse drug event monitoring: 2 (40.0), 3 (60.0), 0 (0.0), 5
- Subtotal: 66 (64.1), 33 (32.0), 4 (3.9), 103
Preventive care
- Reminders, computer: 5 (100.0), 0 (0.0), 0 (0.0), 5
- Reminders, printed: 16 (69.6), 7 (30.4), 0 (0.0), 23
- Reminders plus other interventions, printed: 10 (66.7), 5 (33.3), 0 (0.0), 15
- Subtotal: 31 (72.1), 12 (27.9), 0 (0.0), 43
Health conditions
- Diagnostic aid, abdominal/chest pain: 2 (28.6), 5 (71.4), 0 (0.0), 7
- Disease management, diabetes: 7 (50.0), 7 (50.0), 0 (0.0), 14
- Disease management, hypertension: 7 (58.3), 5 (41.7), 0 (0.0), 12
- Disease management, other conditions: 7 (36.8), 12 (63.2), 0 (0.0), 19
- Disease management, orders/alerts: 4 (80.0), 1 (20.0), 0 (0.0), 5
- Subtotal: 27 (47.4), 30 (52.6), 0 (0.0), 57
Data quality
- EPR in primary care: 21 (58.3), 12 (33.3), 3 (8.3), 36
- Facility-based EPR: 16 (76.2), 3 (14.3), 2 (9.5), 21
- Admin registers/research databases: 19 (70.4), 8 (29.6), 0 (0.0), 27
- Subtotal: 56 (66.7), 23 (27.4), 5 (6.0), 84
Total: 180 (62.7), 98 (34.1), 9 (3.1), 287

Values are number (%). CDSS, clinical decision support systems; CPOE, computerized physician order entry; EPR, electronic patient record.


DISCUSSION
Cumulative evidence on HIS studies
This review extends the HIS evidence base in three significant ways. Firstly, our synopsis of the 50 HIS reviews provides a critical assessment of the current state of knowledge on the effects of HIS in medication management, health conditions, preventive care, data quality, and care process/outcome. Our concise
summary of the selected reviews in supplementary online
table 1 can guide HIS practitioners in planning/conducting HIS
evaluation studies by drawing on approaches used by others and
comparing their results with what is already known in such
areas as electronic prescribing,24 drug dosing,28 preventive care
reminders,26 and EPR quality.56
Secondly, the grouping of evaluation metrics from the 50 HIS
reviews according to the Infoway BE Framework (which is based
on DeLone's IS Success Model13) provides a coherent scheme when implementing HIS to make sense of the different factors that influence HIS success. Through this review, we also found additional factors not covered by the BE Framework that warrant its further refinement (refer to table 2). These factors
include having in-house systems, developers as users, integrated
decision support, and benchmark practices. Important contextual factors include: patient/provider knowledge, perception and
attitude; implementation; improvement; incentives; legislation/
policy; and interoperability.
Thirdly and most importantly, our meta-synthesis produced
a non-overlapping corpus of 1276 HIS studies from the 50
reviews and consolidated the cumulative HIS effects in four

healthcare domains with a subset of 287 controlled studies. This


is a significant milestone that has not been attempted previously. To illustrate, many of the 50 reviews were found
subsumed by the more recent Garg et al,10 Nies et al,15 Chaudhry
et al,22 and Car et al20 reviews which cover 100, 106, 257, and 284
HIS studies in multiple domains, respectively. Yet even with these four comprehensive reviews, it was difficult to integrate their findings in a meaningful way because of significant overlap among the HIS studies. The findings were also reported in different
forms, making comparison even more challenging. In contrast,
our organizing scheme for associating HIS features, metrics, and
effects using a non-overlapping corpus, as shown in figure 3A, B, provides a concise and quantifiable way of consolidating review findings that is relevant and meaningful to HIS practitioners.

Recommendations to improve HIS adoption


We believe the cumulative evidence from this meta-synthesis
provides the contexts needed to guide future HIS adoption efforts.
For example, our consolidated findings suggest there is evidence of
improved quality in preventive care reminders and CPOE/CDSS
for medication management. As such, one may focus on replicating successful HIS adoption efforts from benchmark institutions such as those described in the Dexheimer et al26 and
Ammenwerth et al24 reviews for reminders and e-prescribing,
respectively, by incorporating similar HIS features and practices
into the local settings. Conversely, in drug dosing, ADE monitoring, and disease management, where the evidence from our
synthesis is variable, attention may shift to redesigning HIS
features/workflows and addressing contextual barriers that have
hindered adoption, as described in the van der Sijs et al,59 Bates et
al45 and Dorr et al14 reviews. The distribution of HIS studies by
evaluation dimension from our meta-synthesis (refer to figure 2)
shows that the areas requiring ongoing research attention are HIS
technical performance, information availability, service quality,
user readiness (intention to use HIS), user competency, care
access/availability, and care coordination. In particular, the shift
toward team-based care, as shown in the review of van der Kam et
al,58 will require the careful implementation of HIS to facilitate
effective communication and information sharing across the care
continuum, which is not well addressed at present.80 Given the
importance of contexts in HIS adoption as suggested in the
Chaudhry et al22 and Car et al20 reviews, practitioners and
researchers should refer to specic HIS studies in the corpus that
are similar to their organizational settings and practices for
comparison and guidance.
Drawing on this cumulative evidence, we have three recommendations to improve HIS adoption. Firstly, to emulate
successful HIS benchmark practices, one must pay attention to
specific HIS features and key factors that are critical to making
the system workable. To do so, frontline healthcare providers
must be engaged on an ongoing basis to ensure the HIS can be
integrated into the day-to-day work practice to improve their
overall performance. The HIS must be sufficiently adaptable
over time as providers gain experience and insights on how best
to use more advanced HIS features such as CDSS and reminders.
Secondly, there should be a planned and coordinated approach to
addressing the contextual issues. The metrics identified as
extensions to the Infoway BE Framework on patients/providers,
incentives, change management, implementation, legislation/
policy, interoperability, and correlation of HIS features/effects
are all issues that must be addressed as needed. Thirdly, one has
to demonstrate return-on-value by measuring the clinical
impact. Evaluation should be an integral part of all HIS adoption efforts in healthcare organizations. Depending on the stage
of HIS adoption, appropriate evaluation design and metrics
should be used to examine the contexts, quality, use, and effects
of the HIS involved. For example, organizations in the process of
implementing an HIS should conduct formative evaluation
studies to ensure HIS-practice fit and sustained use through
ongoing feedback and adaptation of the system and contexts.
When a HIS is already in routine use, summative evaluation
with controlled studies and performance/outcome-oriented
metrics should be used to determine the impact of HIS usage.
Qualitative methods should be included to examine subjective
effects such as provider/patient perceptions and unintended
consequences that may have emerged.

Figure 3 (A) Frequency of positive, neutral, and negative controlled health information system (HIS) studies by reported HIS features. (B) Frequency of positive, neutral, and negative controlled HIS studies by reported HIS metrics. CDSS, clinical decision support systems; CPOE, computerized physician order entry; EPR, electronic patient record; LOS, length of stay.

Implications for HIS research


Given the amount of evidence already in existence, it is important to build on such knowledge without duplicating effort.
Researchers interested in conducting reviews on the effects of
specific HIS could benefit from our review corpus by leveraging

what has already been reported to avoid repetition. Those


wishing to conduct HIS evaluation studies could consider our
organizing schemes for categorizing HIS features, metrics, and
effects to improve their consistency and comparability across
studies. The variable findings across individual studies evaluating equivalent HIS features suggest that further research is
needed to understand how these systems should be designed.
Even having HIS features such as CDSS in medication
management with strong evidence does not guarantee success,
and indeed, may cause harm.81 Research into the characteristics of success using such methods as participatory design,82
usability engineering,83 and project risk assessment84 will be
critical to planning and guiding practitioners in successful
implementations. Also, further research into the nature of
system design, as suggested in the Kawamoto et al41 review
(eg, usability, user experience, and contextualized process
analysis), could help to promote safer and more effective HIS
design.
This meta-synthesis has shown that different methodological quality assessment instruments were applied in the reviews. There was considerable variability across reviews when the same studies were assessed. These instruments need to be streamlined to provide a consistent approach to appraising the quality of HIS studies. For example, the Johnston et al62 five-item quality scale could be adopted as the common instrument, as it is already used in 10 reviews. The analysis and reporting of HIS evaluation findings in the reviews also require work. The current narrative approach to summarizing evaluation findings lacks a concise synthesis for HIS practitioners, yet more sophisticated techniques such as meta-analysis are not easy to comprehend. Further work is needed on how one can organize review findings in meaningful ways to inform HIS practice.

Table 4 Frequency of positive, neutral, and negative controlled health information system (HIS) studies by reported HIS metrics

HIS metrics: Positive (%), Neutral (%), Negative (%), Total
Care quality
- Patient safety, medication errors: 42 (63.6), 21 (31.8), 3 (4.5), 66
- Patient safety, adverse events: 0 (0.0), 9 (100.0), 0 (0.0), 9
- Patient safety, target therapeutic ranges: 23 (51.1), 21 (46.7), 1 (2.2), 45
- Practice standards, provider performance: 7 (46.7), 8 (53.3), 0 (0.0), 15
- Guideline adherence, cancer screening: 34 (54.0), 29 (46.0), 0 (0.0), 63
- Guideline adherence, health screening: 8 (66.7), 4 (33.3), 0 (0.0), 12
- Guideline adherence, immunization: 11 (84.6), 2 (15.4), 0 (0.0), 13
- Guideline adherence, tests/assessments/care: 29 (64.4), 16 (35.6), 0 (0.0), 45
- Guideline adherence, medications: 34 (61.8), 21 (38.2), 0 (0.0), 55
- Health outcomes, mortality/morbidity/LOS: 11 (31.4), 23 (65.7), 1 (2.9), 35
- Health outcomes, physio/psychological measures: 6 (17.1), 29 (82.9), 0 (0.0), 35
- Health outcomes, quality of life: 0 (0.0), 5 (100.0), 0 (0.0), 5
- Subtotal: 205 (51.5), 188 (47.2), 5 (1.3), 398
Provider productivity
- Efficiency, resource utilization: 7 (38.9), 11 (61.1), 0 (0.0), 18
- Efficiency, provider time/time-to-care: 6 (50.0), 2 (16.7), 4 (33.3), 12
- Healthcare cost: 12 (50.0), 10 (41.7), 2 (8.3), 24
- Subtotal: 25 (46.3), 23 (42.6), 6 (11.1), 54
User satisfaction
- User perception, experiences, knowledge: 1 (33.3), 2 (66.7), 0 (0.0), 3
Information quality
- Content, accuracy: 55 (76.4), 17 (23.6), 0 (0.0), 72
- Content, completeness: 25 (61.0), 11 (26.8), 5 (12.2), 41
- Content, overall quality: 2 (28.6), 3 (42.9), 2 (28.6), 7
- Subtotal: 82 (68.3), 31 (25.8), 7 (5.8), 120
Total: 313 (54.4), 244 (42.4), 18 (3.1), 575

LOS, length of stay.

Review limitations
There are limitations to this meta-synthesis. Firstly, only English review articles in scientific journals were included; we could have missed reviews in other languages and those in gray literature. Secondly, we excluded reviews in telemedicine/telehealth, patient systems, education interventions, and mobile devices; their inclusion may have led to different interpretations. Thirdly, our organizing schemes and vote-counting methods for correlating HIS features, metrics, and effects were simplistic, which may not have reflected the intricacies associated with specific HIS and evaluation findings reported. Lastly, our meta-analysis covered a wide range of complex issues, and could be viewed as ambitious and inadequate for addressing them in a substantive manner.

CONCLUSIONS
This meta-synthesis shows there is some evidence for improved quality of care from HIS adoption. However, the strength of this evidence varies by topic, HIS feature, setting, and evaluation metric. While some areas, such as the use of reminders for guideline adherence in preventive care, were effective, others, notably in disease management and provider productivity, showed no significant improvement. Factors that influence HIS success include having in-house systems, developers as users, integrated decision support and benchmark practices, and addressing such contextual issues as provider knowledge and perception, incentives, and legislation/policy. Drawing on this evidence to establish benchmark practices, especially in non-academic settings, is an important step towards advancing HIS knowledge.

Acknowledgments We acknowledge Dr Kathryn Hornby's help as the medical librarian on the literature searches.

Funding Support for this review was provided by the Canadian Institutes for Health Research, Canada Health Infoway and the College of Pharmacists of British Columbia.

Competing interests None.

Provenance and peer review Not commissioned; externally peer reviewed.

REFERENCES
1. Berner ES, Detmer DE, Simborg D. Will the wave finally break? A brief view of the adoption of electronic medical records in the United States. J Am Med Inform Assoc 2005;12:3-7.
2. Schoen C, Osborn R, Huynh PH, et al. On the front lines of care: primary care physician office systems, experiences and views in seven countries. Health Aff 2006;25:w555-71.
3. Ammenwerth E, de Keizer N. An inventory of evaluation studies of information technology in health care. Methods Inf Med 2005;44:44-56.
4. Han YY, Carcillo JA, Venkataraman ST, et al. Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics 2005;116:1506-12.
5. Del Beccaro MA, Jeffries HE, Eisenberg MA, et al. Computerized provider order entry implementation: no association with increased mortality rates in an intensive care unit. Pediatrics 2006;119:290-5.
6. Nebeker JR, Hoffman JM, Weir CR, et al. High rates of adverse drug events in a highly computerized hospital. Arch Intern Med 2005;165:1111-16.
7. Ash JS, Sittig DF, Dykstra RH, et al. Categorizing the unintended sociotechnical consequences of computerized provider order entry. Int J Med Inf 2007;76S:S21-7.
8. Balas EA, Austin SM, Mitchell JA, et al. The clinical value of computerized information services. A review of 98 randomized clinical trials. Arch Fam Med 1996;5:271-8.
9. Cramer K, Hartling L, Wiebe N, et al. Computer-based delivery of health evidence: a systematic review of randomised controlled trials and systematic reviews of the effectiveness on the process of care and patient outcomes. Alberta Heritage Foundation (Final Report), Jan 2003.
10. Garg AX, Adhikari NK, McDonald H, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 2005;293:1223-38.
11. Lau F, Hagens S, Muttitt S. A proposed benefits evaluation framework for health information systems in Canada. Healthcare Quarterly 2007;10:112-18.
12. Van der Meijden MJ, Tange HJ, Hasman TA. Determinants of success of inpatient clinical information systems: a literature review. J Am Med Inform Assoc 2003;10:235-43.
13. DeLone WH, McLean ER. The DeLone and McLean model of information system success: a ten year update. J Manag Info Systems 2003;19:9-30.
14. Dorr D, Bonner LM, Cohen AN, et al. Informatics systems to promote improved care for chronic illness: a literature review. J Am Med Inform Assoc 2007;14:156-63.
15. Niès J, Colombet I, Degoulet P, et al. Determinants of success for computerized clinical decision support systems integrated into CPOE systems: a systematic review. Proc AMIA Symp 2006:594-8.
16. Bennett JW, Glasziou PP. Computerised reminders and feedback in medication management: a systematic review of randomised controlled trials. Med J Aust 2003;178:217-22.
17. Bryan C, Austin Boren S. The use and effectiveness of electronic clinical decision support tools in the ambulatory/primary care setting: a systematic review of the literature. Inform Prim Care 2008;16:79-91.
18. Buntinx F, Winkens R, Grol R, et al. Influencing diagnostic and preventive performance in ambulatory care by feedback and reminders, a review. J Fam Pract 1993;10:219-28.
19. Byrne N, Regan C, Howard L. Administrative registers in psychiatric research: a systematic review of validity studies. Acta Psychiatr Scand 2005;112:409-14.
20. Car J, Black A, Anandan A, et al. The Impact of eHealth on the Quality & Safety of Healthcare: A Systemic Overview & Synthesis of the Literature. Report for the NHS Connecting for Health Evaluation Programme, 2008.
21. Chatellier G, Colombet I, Degoulet P. An overview of the effect of computer-assisted management of anticoagulant therapy on the quality of anticoagulation. Int J Med Inf 1998;49:311-20.
22. Chaudhry B, Wang J, Wu S, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med 2006;144:742-52.
23. Colombet I, Chatellier G, Jaulent MC, et al. Decision aids for triage of patients with chest pain: a systematic review of field evaluation studies. Proc AMIA Symp 1999:231-5.
24. Ammenwerth E, Schnell-Inderst P, Machan C, et al. The effect of electronic prescribing on medication errors and adverse drug events: a systematic review. J Am Med Inform Assoc 2008;15:585-600.
25. Delpierre C, Cuzin L, Fillaux J, et al. A systematic review of computer-based patient record systems and quality of care: more randomized clinical trials or a broader approach? Int J Qual Health Care 2004;16:407-16.
26. Dexheimer JW, Talbot TR, Sanders DL, et al. Prompting clinicians about preventive care measures: a systematic review of randomized controlled trials. J Am Med Inform Assoc 2008;15:311-20.
27. Austin SM, Balas EA, Mitchell JA, et al. Effect of physician reminders on preventive care: meta-analysis of randomized clinical trials. Proc Annu Symp Comput Appl Med Care 1994:121-4.
28. Durieux P, Trinquart L, Colombet I, et al. Computerized advice on drug dosage to improve prescribing practice. Cochrane Database Syst Rev 2008;(3):CD002894.
29. Eslami S, Abu-Hanna A, De Keizer N. Evaluation of outpatient computerized physician medication order entry systems: a systematic review. J Am Med Inform Assoc 2007;14:400-6.
30. Eslami S, De Keizer N, Abu-Hanna A. The impact of computerized physician medication order entry in hospitalized patients: a systematic review. Int J Med Inf 2008;77:365-76.
31. Fitzmaurice DA, Hobbs FD, Delaney BC, et al. Review of computerized decision support systems for oral anticoagulation management. Br J Haematol 1998;102:907-9.
32. Balas EA, Krishna S, Kretschmer RA, et al. Computerized knowledge management in diabetes care. Med Care 2004;42:610-21.
33. Georgiou A, Williamson M, Westbrook JI, et al. The impact of computerised physician order entry systems on pathology services: a systematic review. Int J Med Inf 2007;76:514-29.
34. Handler SM, Altman RL, Perera S, et al. A systematic review of the performance characteristics of clinical event monitor signals used to detect adverse drug events in the hospital setting. J Am Med Inform Assoc 2007;14:451-8.
35. Hogan WR, Wagner MM. Accuracy of data in computer-based patient records. J Am Med Inform Assoc 1997;4:342-55.
36. Jackson CL, Bolen S, Brancati FL, et al. A systematic review of interactive computer-assisted technology in diabetes care. J Gen Intern Med 2006;21:105-10.
37. Jerant AF, Hill DB. Does the use of electronic medical records improve surrogate patient outcomes in outpatient settings? J Fam Pract 2000;49:349-57.
38. Jimbo M, Nease DE Jr, Ruffin MT 4th, Rana GK. Information technology and cancer prevention. CA Cancer J Clin 2006;56:26-36.
39. Jordan K, Porcheret M, Croft P. Quality of morbidity coding in general practice computerized medical records: a systematic review. J Fam Pract 2004;21:396-412.
40. Kaushal R, Shojania KG, Bates DW. Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med 2003;163:1409-16.
41. Kawamoto K, Houlihan CA, Balas EA, et al. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ 2005;330:765-72.
42. Liu JLY, Wyatt JC, Deeks JJ, et al. Systematic reviews of clinical decision tools for acute abdominal pain. Health Technol Assess (NHS R&D HTA) 2006;10:1-167.
43. Mitchell E, Sullivan F. A descriptive feast but an evaluative famine: systematic review of published articles on primary care computing during 1980-97. BMJ 2001;322:279-82.
44. Montgomery AA, Fahey T. A systematic review of the use of computers in the management of hypertension. J Epidemiol Community Health 1998;52:520-5.
45. Bates DW, Evans RS, Murff H, et al. Detecting adverse events using information technology. J Am Med Inform Assoc 2003;10:115-28.
46. Oren E, Shaffer ER, Guglielmo BJ. Impact of emerging technologies on medication errors and adverse drug events. Am J Health-Syst Pharm 2003;60:1447-58.
47. Poissant L, Pereira J, Tamblyn R, et al. The impact of electronic health records on time efficiency of physicians and nurses: a systematic review. J Am Med Inform Assoc 2005;12:505-16.
48. Randell R, Mitchell N, Dowding D, et al. Effects of computerized decision support systems on nursing performance and patient outcomes: a systematic review. J Health Serv Res Policy 2007;12:242-9.
49. Rothschild J. Computerized physician order entry in the critical care and general inpatient setting: a narrative review. J Crit Care 2004;19:271-8.
50. Sanders DL, Aronsky D. Biomedical applications for asthma care: a systematic review. J Am Med Inform Assoc 2006;13:418-27.
51. Shea S, DuMouchel W, Bahamonde L. A meta-analysis of 16 randomized controlled trials to evaluate computer-based clinical reminder systems for preventive care in the ambulatory setting. J Am Med Inform Assoc 1996;3:399-409.
52. Shebl NA, Franklin BD, Barber N. Clinical decision support systems and antibiotic use. Pharm World Sci 2007;29:342-9.

53. Shiffman RN, Liaw Y, Brandt CA, et al. Computer-based guideline implementation systems: a systematic review of functionality and effectiveness. J Am Med Inform Assoc 1999;6:104-14.
54. Sintchenko V, Magrabi F, Tipper S. Are we measuring the right end-points? Variables that affect the impact of computerised decision support on patient outcomes: a systematic review. Med Inform Internet Med 2007;32:225-40.
55. Tan K, Dear PRF, Newell SJ. Clinical decision support systems for neonatal care. Cochrane Database Syst Rev 2008;(2):CD004211.
56. Thiru K, Hassey A, Sullivan F. Systematic review of scope and quality of electronic patient record data in primary care. BMJ 2003;326:1070-4.
57. Urquhart C, Currell R, Grant MJ, et al. Nursing record systems: effects on nursing practice and healthcare outcomes. Cochrane Database Syst Rev 2008;(1):CD002099.
58. Van der Kam WJ, Moorman PW, Koppejan-Mulder MJ. Effects of electronic communication in general practice. Int J Med Inf 2000;60:59-70.
59. Van der Sijs H, Aarts J, Vulto A, et al. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc 2006;13:138-47.
60. Wolfstadt JL, Gurwitz JH, Field TS, et al. The effect of computerized physician order entry with clinical decision support on the rates of adverse drug events: a systematic review. J Gen Intern Med 2008;23:451-8.
61. Hunt DL, Haynes RB, Hanna SE, et al. Effects of computer-based clinical decision support systems on physician performance and patient outcomes. JAMA 1998;280:1339-46.
62. Johnston ME, Langton KB, Haynes RB, et al. Effects of computer-based clinical decision support systems on clinician performance and patient outcome. A critical appraisal of research. Ann Intern Med 1994;120:135-42.
63. Kawamoto K, Lobach DF. Clinical decision support provided within physician order entry systems: a systematic review of features effective for changing clinician behavior. AMIA Annu Symp Proc 2003:361-5.
64. Shekelle PG, Morton SC, Keeler EB. Costs and benefits of health information technology. Evid Rep Technol Assess (Full Rep) 2006:1-71.
65. Sullivan F, Mitchell E. Has general practitioner computing made a difference to patient care? A systematic review of published reports. BMJ 1995;311:848-52.
66. Tan K, Dear PRF, Newell SJ. Clinical decision support systems for neonatal care. Cochrane Database Syst Rev 2005;(2):CD004211.
67. Walton RT, Dovey S, Harvey E, et al. Computer support for determining drug dose: systematic review and meta-analysis. BMJ 1999;318:984-90.
68. Walton RT, Harvey E, Dovey S, et al. Computerized advice on drug dosage to improve prescribing practice. Cochrane Database Syst Rev 2001;(1):CD002894.
69. Higgins JPT, Green S, eds. Cochrane Handbook for Systematic Reviews of Interventions Version 5.0.2 [updated September 2009]. The Cochrane Collaboration, 2009. Available from www.cochrane-handbook.org (accessed 20 August 2010).
70. Jaeschke R, Guyatt G, Sackett DL. Users' guides to the medical literature, III. How to use an article about a diagnostic test. A. Are the results of the study valid? Evidence-Based Medicine Working Group. JAMA 1994;271:389-91.
71. Bates D, Teich J, Lee J, et al. The impact of computerized physician order entry on medication error prevention. J Am Med Inform Assoc 1999;6:313-21.
72. Abbrecht PH, O'Leary TJ, Behrendt DM. Evaluation of a computer-assisted method for individualized anticoagulation: retrospective and prospective studies with a pharmacodynamic model. Clin Pharmacol Ther 1982;32:129-36.
73. Hales JW, Gardner RM, Jacobson JT. Factors impacting the success of computerized preadmission screening. Proc Annu Symp Comput Appl Med Care 2005:728-32.
74. Horn W, Popow C, Miksch S, et al. Development and evaluation of VIE-PNN, a knowledge-based system for calculating the parenteral nutrition of newborn infants. Artif Intell Med 2002;24:217-28.
75. Selker HP, Beshansky JR, Griffith JL, et al. Use of the acute cardiac ischemia time-insensitive predictive instrument (ACI-TIPI) to assist with triage of patients with chest pain or other symptoms suggestive of acute cardiac ischemia: a multicentre, controlled clinical trial. Ann Intern Med 1998;129:845-55.
76. Tang PC, LaRose MP, Newcomb C, et al. Measuring the effects of reminders for outpatient influenza immunizations at the point of clinical opportunity. J Am Med Inform Assoc 1999;6:115-21.
77. Thomas JC, Moore A, Qualls PE. The effect on cost of medical care for patients treated with an automated clinical audit system. J Med Syst 1983;7:307-13.
78. Wexler JR, Swender PT, Tunnessen WW Jr, et al. Impact of a system of computer-assisted diagnosis. Am J Dis Child 1975;129:203-5.
79. Wyatt JR. Lessons learnt from the field trial of ACORN, an expert system to advise on chest pain. In: Barber B, Cao D, Quin D, Wagner G, eds. Proceedings of the 6th World Conference on Medical Informatics. Singapore: North-Holland, 1989:111-15.
80. Tang PC. Key Capabilities of an Electronic Health Record System: Letter Report. Committee on Data Standards for Patient Safety. http://www.nap.edu/catalog/10781.html (accessed 10 Nov 2009).
81. Koppel R, Metlay JP, Cohen A, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005;293:1197-203.
82. Vimarlund V, Timpka T. Design participation as insurance: risk management and end-user participation in the development of information system in healthcare organizations. Methods Inf Med 2002;41:76-80.
83. Kushniruk AW, Patel VL. Cognitive and usability engineering methods for the evaluation of clinical information systems. J Biomed Inform 2005;37:56-76.
84. Pare G, Sicotte C, Joana M, et al. Prioritizing the risk factors influencing the success of clinical information system projects: a Delphi study in Canada. Methods Inf Med 2008;47:251-9.
