
ORIGINAL CONTRIBUTION

Comparison of 30-Day Mortality Models for Profiling Hospital Performance in Acute Ischemic Stroke With vs Without Adjustment for Stroke Severity
Gregg C. Fonarow, MD; Wenqin Pan, PhD; Jeffrey L. Saver, MD; Eric E. Smith, MD, MPH; Mathew J. Reeves, PhD; Joseph P. Broderick, MD; Dawn O. Kleindorfer, MD; Ralph L. Sacco, MD; DaiWai M. Olson, PhD; Adrian F. Hernandez, MD, MHS; Eric D. Peterson, MD, MPH; Lee H. Schwamm, MD

Context There is increasing interest in reporting risk-standardized outcomes for Medicare beneficiaries hospitalized with acute ischemic stroke, but whether it is necessary to include adjustment for initial stroke severity has not been well studied.

Objective To evaluate the degree to which hospital outcome ratings and potential eligibility for financial incentives are altered after including initial stroke severity in a claims-based risk model for hospital 30-day mortality for acute ischemic stroke.

Design, Setting, and Patients Data were analyzed from 782 Get With The Guidelines-Stroke participating hospitals on 127 950 fee-for-service Medicare beneficiaries with ischemic stroke who had a score documented for the National Institutes of Health Stroke Scale (NIHSS, a 15-item neurological examination scale with scores from 0 to 42, with higher scores indicating more severe stroke) between April 2003 and December 2009. Performance of claims-based hospital mortality risk models with and without inclusion of NIHSS scores for 30-day mortality was evaluated, and hospital rankings from both models were compared.

Main Outcome Measures Model discrimination, hospital 30-day mortality outcome rankings, and value-based purchasing financial incentive categories.

Results Across the study population, the mean (SD) NIHSS score was 8.23 (8.11) (median, 5; interquartile range, 2-12). There were 18 186 deaths (14.5%) within the first 30 days, including 7430 deaths (5.8%) during the index hospitalization. The hospital mortality model with NIHSS scores had significantly better discrimination than the model without (C statistic, 0.864; 95% CI, 0.861-0.867, vs 0.772; 95% CI, 0.769-0.776; P < .001). Among hospitals ranked in the top 20% or bottom 20% of performers by the claims model without NIHSS scores, 26.3% were ranked differently by the model with NIHSS scores. Of hospitals initially classified as having worse than expected mortality, 57.7% were reclassified to as expected by the model with NIHSS scores. The net reclassification improvement (93.1%; 95% CI, 91.6%-94.6%; P < .001) and integrated discrimination improvement (15.0%; 95% CI, 14.6%-15.3%; P < .001) indexes both demonstrated significant enhancement of model performance after the addition of NIHSS. Explained variance and model calibration were also improved with the addition of NIHSS scores.

Conclusion Adding stroke severity as measured by the NIHSS to a hospital 30-day mortality risk model based on claims data for Medicare beneficiaries with acute ischemic stroke was associated with considerably improved model discrimination and with changed mortality performance rankings for a substantial portion of hospitals.

JAMA. 2012;308(3):257-264  www.jama.com

For editorial comment see p 292.

Author Affiliations: Division of Cardiology (Dr Fonarow) and Department of Neurology (Dr Saver), University of California, Los Angeles; Duke Clinical Research Institute, Durham, North Carolina (Drs Pan, Olson, Hernandez, and Peterson); Department of Clinical Neurosciences, Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada (Dr Smith); Department of Epidemiology, Michigan State University, East Lansing (Dr Reeves); Department of Neurology, University of Cincinnati Academic Health Center, Cincinnati, Ohio (Drs Broderick and Kleindorfer); Miller School of Medicine, University of Miami, Miami, Florida (Dr Sacco); and Department of Neurology, Massachusetts General Hospital, Boston (Dr Schwamm).

Corresponding Author: Gregg C. Fonarow, MD, Ahmanson-UCLA Cardiomyopathy Center, Ronald Reagan UCLA Medical Center, 10833 LeConte Ave, Room 47-123 CHS, Los Angeles, CA 90095-1679 (gfonarow@mednet.ucla.edu).

© 2012 American Medical Association. All rights reserved.

HOSPITAL PERFORMANCE IN ACUTE ISCHEMIC STROKE

Increasing attention has been given to defining the quality and value of health care through reporting of process and outcome measures.1,2 National quality profiling efforts have begun to report hospital-level performance for Medicare beneficiaries, including 30-day mortality rates, for common medical conditions, including acute myocardial infarction, heart failure, and community-acquired pneumonia.3-5 These outcome measures have been adopted by accreditation organizations, the Centers for Medicare & Medicaid Services (CMS), and other payers, and rewards based on risk-adjusted outcomes have been included in health care reform legislation.6,7 Because stroke is among the leading causes of death, disability, hospitalizations, and health care expenditures in the United States,8 there is increasing interest in also reporting outcomes for Medicare beneficiaries hospitalized with acute ischemic stroke.9,10

Risk adjustment for case mix is considered essential for accurately assessing and reporting hospital-level outcomes.3-5,11 The risk-adjustment models currently used by CMS incorporate data exclusively from administrative claims.3-5 Although claims data risk models for acute myocardial infarction, heart failure, and community-acquired pneumonia were validated against clinical data,3-5 adequate case-mix adjustment for acute ischemic stroke by claims data may be particularly difficult, and current models do not include adjustment for stroke severity. It has been previously shown in patient-level analyses that stroke severity as indexed by the National Institutes of Health Stroke Scale (NIHSS) is an important predictor of mortality in acute ischemic stroke.12-15 However, whether adjustment for stroke severity is necessary for hospital-level risk profiling has not been well studied. Using data from hospitals participating in the Get With The Guidelines-Stroke (GWTG-Stroke) program on acute ischemic stroke admissions linked to Medicare data, this study was designed to (1) evaluate the change in model performance from adding or not adding NIHSS score for predicting hospital-level 30-day all-cause mortality for Medicare beneficiaries with acute ischemic stroke and (2) determine whether there are meaningful differences in hospital ranking with the use of models with and without adjustment for NIHSS score.

METHODS

Clinical data including stroke severity were obtained from GWTG-Stroke, and administrative claims data were obtained from CMS. Data from the GWTG-Stroke registry were linked with enrollment files and inpatient claims from CMS for the period April 1, 2003, through December 31, 2009. Follow-up continued through 2010. The design, inclusion criteria, and data collection methods for GWTG-Stroke have been described previously.16,17 Patients were eligible for inclusion in the GWTG-Stroke registry if they were admitted for acute stroke. Trained hospital personnel ascertained acute ischemic stroke admissions by prospective clinical identification, retrospective identification using discharge codes from the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM), or a combination of the two. Patient data abstracted by trained hospital personnel included demographics, medical history, in-hospital treatment and events, discharge treatment and counseling, mortality, and discharge destination. Admission staff, medical staff, or both recorded race/ethnicity, usually as the patient was registered. Prior studies have suggested differences in outcomes based on race/ethnicity. All patient data were deidentified before submission. All states and regions of the United States were represented, and a variety of centers participated, from community hospitals to large tertiary centers. Data on hospital-level characteristics were obtained from the American Hospital Association.18 All participating institutions were required to comply with local regulatory and privacy guidelines and, as determined by each participating institution, to obtain institutional review board approval. Outcome Sciences served as the registry coordinating center. The Duke Clinical Research Institute served as the data analysis center and has an agreement to analyze the aggregate deidentified data for research purposes. The institutional review board of the Duke University Health System approved the study. The CMS files (100% Medicare Research Identifiable Files) included data for all fee-for-service Medicare beneficiaries aged 65 years or older who were hospitalized with a diagnosis of acute stroke (ICD-9-CM codes 430.x, 431.x, 433.x, 434.x, and 436.x).
We linked patient data in the GWTG-Stroke registry with Medicare Part A inpatient claims, matching by admission and discharge dates, hospital, date of birth, and sex using methods previously described.15,17,19,20 Patients in Medicare managed care plans (15%-25% of the population, depending on the region of the country) or with other types of insurance are not included in fee-for-service Medicare claims files and therefore cannot be matched.15,17,19,20 Patients from centers with fewer than 25 ischemic stroke patients with NIHSS score documented during the study period were excluded to minimize the likelihood of sampling error.
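The deterministic linkage described above can be sketched as an exact match on a composite key built from the shared fields. The field names and records below are hypothetical illustrations, not the cited linkage implementation:

```python
# Sketch of deterministic registry-to-claims linkage: records match only when
# admission date, discharge date, hospital, date of birth, and sex all agree.
# Field names and example records are hypothetical.

def link_key(rec):
    return (rec["admit"], rec["discharge"], rec["hospital"], rec["dob"], rec["sex"])

def link(registry, claims):
    """Return (registry record, claims record) pairs with identical keys."""
    claims_by_key = {link_key(c): c for c in claims}
    return [(r, claims_by_key[link_key(r)])
            for r in registry if link_key(r) in claims_by_key]

registry = [{"admit": "2005-03-01", "discharge": "2005-03-06",
             "hospital": "A", "dob": "1930-01-02", "sex": "F", "nihss": 7}]
claims = [{"admit": "2005-03-01", "discharge": "2005-03-06",
           "hospital": "A", "dob": "1930-01-02", "sex": "F", "died30": 0}]

pairs = link(registry, claims)
print(len(pairs))  # 1
```

Because the match is exact on every field, patients absent from the fee-for-service claims files (for example, managed care enrollees) simply produce no pair, mirroring the exclusion described in the text.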
NIHSS

The NIHSS is a 15-item neurologic examination stroke scale used to provide a quantitative measure of stroke-related neurologic deficit by evaluating the effect of acute ischemic stroke on the levels of consciousness, language, neglect, visual-field loss, extraocular movement, motor strength, ataxia, dysarthria, and sensory loss.15,21 The NIHSS is designed to be a simple, valid, and reliable tool that can be administered at the bedside consistently by physicians, nurses, or therapists. Each item is scored with 3 to 5 grades, with 0 as normal and the final total score having a potential range of 0 to 42, with higher scores indicating greater stroke severity. The first recorded NIHSS score, as close to admission time as possible, was collected.
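Numerically, the total score is simply the sum of the item grades, bounded by the 0 to 42 range described above. A minimal sketch (the item names and grades below are hypothetical, not a clinical scoring form):

```python
# Sketch: an NIHSS total is the sum of item grades and must fall in 0-42.
# Item names and grades here are illustrative only.

def nihss_total(item_scores):
    """Sum NIHSS item grades and sanity-check the 0-42 total range."""
    for name, score in item_scores.items():
        if score < 0:
            raise ValueError(f"negative grade for {name}")
    total = sum(item_scores.values())
    if not 0 <= total <= 42:
        raise ValueError("NIHSS total out of range 0-42")
    return total

# A hypothetical moderate stroke: grades sum to a total of 8,
# close to the cohort mean of 8.23 reported in this study.
scores = {"consciousness": 1, "language": 2, "motor_left_arm": 3,
          "visual_fields": 1, "dysarthria": 1}
print(nihss_total(scores))  # 8
```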
Hospital-Level Outcome

The outcome of interest was hospital-level all-cause mortality within 30 days from time of admission. Deaths and dates of death were obtained, with complete ascertainment, from the CMS vital status files.15,17,19,20
30-Day Mortality Model Derivation

The approach to risk model development without the NIHSS score was to closely follow the previously described approach used by the Yale New Haven Health System/Center for Outcomes Research and Evaluation and CMS for other publicly reported 30-day mortality measures3-6 and described in developing a model for 30-day risk-standardized mortality for acute ischemic stroke.10 This approach to risk adjustment generally follows the principles stated in the American Heart Association scientific statement "Standards for Statistical Models Used for Public Reporting of Health Outcomes."11 The candidate variables for the risk model are patient-level risk adjustors expected to be predictive of mortality based on empirical analysis, prior literature, and clinical judgment, including demographic factors (age, sex) and indicators of comorbidity.11 For each patient, the candidate variables considered for this model were derived from the Medicare claims files and included secondary diagnosis and procedure codes from the index hospitalization and principal and secondary diagnosis codes from hospitalizations, institutional outpatient visits, and physician encounters in the 12 months before the index hospitalization.3-6 The model is intended to adjust for case differences based on the diverse aspects of the clinical status of acute ischemic stroke patients at the time of admission. Condition categories are drawn from more than 15 000 ICD-9-CM diagnosis codes.11 The final set of risk-adjustment variables for the claims-based model was selected to align with those included in the proposed CMS acute ischemic stroke 30-day mortality measure. The final 87 variables are shown in eTable 1 (available at http://www.jama.com). The approach to model development with the NIHSS score included was methodologically identical, except for the addition of a measure of disease severity (NIHSS). The NIHSS score was treated as a continuous parameter. Hierarchical generalized logistic regression models were used to model the binary outcome of mortality within 30 days of admission as a function of patient demographic and clinical characteristics and a random hospital-specific effect. This strategy accounts for within-hospital correlation of the observed outcomes.
Additional details are provided in the eMethods.
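The hierarchical structure described above can be sketched as a logistic model whose log-odds combine a patient-level linear predictor with a hospital-specific random intercept. All coefficients below are invented purely for illustration; the study's actual coefficients come from the hierarchical generalized logistic regression fits described in the text:

```python
import math

# Sketch of a hierarchical logistic prediction: log-odds of 30-day death are
# a patient-level linear predictor plus a hospital-specific random intercept.
# All coefficient values here are hypothetical, for illustration only.

def predicted_mortality(age, nihss, hospital_intercept,
                        b0=-6.0, b_age=0.04, b_nihss=0.12):
    logit = b0 + b_age * age + b_nihss * nihss + hospital_intercept
    return 1.0 / (1.0 + math.exp(-logit))

# Other factors equal, a higher NIHSS score yields a higher predicted risk;
# the hospital intercept shifts every patient's risk at that hospital.
low = predicted_mortality(age=80, nihss=2, hospital_intercept=0.0)
high = predicted_mortality(age=80, nihss=20, hospital_intercept=0.0)
print(round(low, 3), round(high, 3))
```

Ranking hospitals by the estimated random intercepts (as done later in the analysis) asks how much mortality risk a hospital adds or subtracts after the patient-level terms are accounted for.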

Statistical Analysis

Discrimination of the base claims model without NIHSS score was assessed by determining the C statistic and was compared with the discrimination of the claims model with NIHSS score. We used the integrated discrimination improvement (IDI) index to measure how the model that included NIHSS score reclassified patients compared with the model without NIHSS score.22,23 A higher IDI index indicates a greater improvement in risk discrimination and improved reclassification. The net reclassification improvement (NRI) index, which compares shifts in reclassification categories by observed outcome resulting from the addition of NIHSS score to the model, was also determined. A higher NRI index likewise indicates a greater improvement in risk discrimination and improved reclassification. We also ranked hospitals by their 30-day adjusted mortality from each model and plotted the agreement between these rankings. We ranked the hospital-specific random intercepts to group hospitals into 3 categories, top 20%, middle 60%, and bottom 20%, and compared the results across models. We chose these categories because they reflect categories that may be relevant to pay-for-performance programs, in which the top 20% of hospitals are eligible for bonus payments and the bottom 20% of hospitals may be subject to a payment penalty.7 We also grouped hospitals into the top 5%, middle 90%, and bottom 5% and compared the results across models. As an additional approach, hospitals whose 95% credible intervals of the estimated random intercepts did not cover the null point were considered to have performance significantly better or worse than the average hospital. These categories are analogous to the proportions of hospitals identified in Hospital Compare as having better than, no different than, or worse than expected 30-day risk-standardized mortality rates.6 All P values are 2-sided, with P < .05 considered statistically significant. SAS software version 9.2 (SAS Institute) was used for all analyses.
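The C statistic used above is the probability that a randomly chosen patient who died received a higher predicted risk than a randomly chosen survivor, with ties counted as one-half. A small sketch on toy data (the analyses themselves were run in SAS):

```python
# Sketch of the C statistic as pairwise concordance: over all (event,
# nonevent) pairs, count pairs where the event got the higher predicted
# risk, with ties worth one-half. Toy predictions for illustration.

def c_statistic(pred, died):
    events = [p for p, d in zip(pred, died) if d == 1]
    nonevents = [p for p, d in zip(pred, died) if d == 0]
    concordant = 0.0
    for e in events:
        for n in nonevents:
            if e > n:
                concordant += 1.0
            elif e == n:
                concordant += 0.5
    return concordant / (len(events) * len(nonevents))

pred = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
died = [1,   1,   0,   1,   0,   0]
print(c_statistic(pred, died))
```

A value of 0.5 means the model ranks deaths no better than chance and 1.0 means perfect ranking, which is why the jump from 0.772 to 0.864 reported later represents a substantial gain in discrimination.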

RESULTS

There were 693 458 hospitalizations of patients with acute ischemic stroke enrolled in GWTG-Stroke between April 2003 and December 2009. From 472 443 hospitalizations of patients aged 65 years or older, we matched 318 393 patients (67.4%) to fee-for-service Medicare claims from 1428 hospitals. Of these acute ischemic stroke patients, 143 481 (45.1%) had NIHSS score documented. We further confined the population to patients' first index stroke admission during the study period (n = 138 314) and to nontransferred patients with acute ischemic stroke (n = 131 404), and we excluded patients from centers with fewer than 25 ischemic stroke patients with NIHSS score documented. This resulted in a final study population of 127 950 acute ischemic stroke patients from 782 GWTG-Stroke hospitals. Among these hospitals, there were 124 428 nontransferred acute ischemic stroke patients without NIHSS score documented during the study period. The demographics, clinical characteristics, geographic distribution, and hospital characteristics of the study patients with and without NIHSS score recorded from the 782 hospitals were similar, with few exceptions (eTable 2).

The characteristics of the 127 950 acute ischemic stroke patients with NIHSS score documented are shown in TABLE 1. The median age was 80 years, 57.3% were women, and 86.2% were white. Prior stroke or transient ischemic attack was present in 32.4% of patients. Comorbidities were common, with hypertension in 82.6%, diabetes in 29.2%, coronary artery disease or prior myocardial infarction in 33.7%, and history of atrial fibrillation or flutter in 26.9%. The median NIHSS score in this overall population was 5 (interquartile range [IQR], 2-12). The median hospital-level NIHSS score was 5 (IQR, 4-7).

Of the 782 GWTG-Stroke hospitals included in this study, median bed size was 377, all regions of the United States were represented, and teaching hospitals accounted for 21.3% of all hospitals and 28.1% of admissions (Table 1).

Table 1. Patient and Hospital Characteristics of the Study Cohort of Medicare Beneficiaries With Acute Ischemic Stroke (n = 127 950)

Age, median (IQR), y                          80 (73-86)
Male sex, %                                   42.7
Race/ethnicity, %
  White                                       86.2
  Black                                       9.5
  Hispanic                                    1.4
Arrival mode, %
  Emergency medical services                  66.2
  Private transport                           31.8
Off-hours arrival time, % a                   50.9
Atrial fibrillation/flutter, %                26.9
Previous stroke/TIA, %                        32.4
CAD/prior MI, %                               33.7
Diabetes mellitus, %                          29.2
Peripheral vascular disease, %                5.7
Hypertension, %                               82.6
Smoker, %                                     10.9
Dyslipidemia, %                               42.1
Body mass index, median (IQR) b               25.8 (22.7-29.6)
NIHSS total score, mean (SD)                  8.23 (8.11)
NIHSS total score, median (IQR)               5 (2-12)
Discharge status, %
  Home                                        36.4
  Skilled nursing facility                    24.7
  Rehabilitation                              26.0
  Transfer to acute care facility             1.7
  Left against medical advice                 0.2
  Hospice                                     5.1
  Died in hospital                            5.8
Ambulatory status, % c
  Ambulated independently                     41.3
  With assistance                             35.7
  Unable to ambulate                          21.6
30-day mortality, %                           14.5
30-day rehospitalization, %                   12.0
Hospital characteristics
  No. of beds, median (IQR)                   377 (265-561)
  No. of stroke discharges, 2009 d
    0-100                                     21.1 (299)
    101-300                                   65.0 (424)
    >300                                      12.8 (44)
  Geographic region d
    Northeast                                 29.1 (225)
    Midwest                                   13.6 (113)
    South                                     42.9 (321)
    West                                      14.5 (110)
  Hospital type d
    Nonteaching                               72.0 (605)
    Teaching                                  28.1 (164)
  Primary stroke center e                     69.0 (437)

Abbreviations: CAD, coronary artery disease; IQR, interquartile range; MI, myocardial infarction; NIHSS, National Institutes of Health Stroke Scale; TIA, transient ischemic attack.
a Arrival outside of regular work hours of 7 AM to 6 PM Monday through Friday.
b Calculated as weight in kilograms divided by height in meters squared.
c Excludes patients who transferred out.
d Data for these hospital characteristics are given as percentage of patients (No. of hospitals). Stroke discharge data were missing for 15 hospitals, and geographic region and hospital type were missing for 13 hospitals.
e As certified by The Joint Commission.

There were 18 186 deaths (14.5%) within the first 30 days, including 7430 deaths during the index hospitalization (in-hospital mortality, 5.8%). The median hospital-level 30-day mortality rate was 14.5% (IQR, 11.3%-17.9%). TABLE 2 reports the performance of the claims model without NIHSS score vs the claims model with NIHSS score for 30-day mortality among acute ischemic stroke patients at all registry hospitals in the analysis. Discrimination, calibration, and explained variance were substantially improved with the addition of NIHSS score. The hospital claims mortality model without NIHSS score had a C statistic of 0.772 (95% CI, 0.769-0.776), whereas the NIHSS score alone had a C statistic of 0.822 (95% CI, 0.819-0.825) and the claims model with the NIHSS score included had a C statistic of 0.864 (95% CI, 0.861-0.867; absolute difference for claims model with vs without NIHSS score, 0.091; 95% CI, 0.088-0.094; P < .001). Explained variance improved over the model without NIHSS score (Table 2). Additional tests for model discrimination were improved with the addition of NIHSS score. The NRI index (93.1%; 95% CI, 91.6%-94.6%; P < .001) and IDI index scores (discrimination slope for model with vs without NIHSS score, 27.7% vs 12.8%; difference, 15.0%; 95% CI, 14.6%-15.3%; P < .001; relative IDI, 1.17) all demonstrated substantially more accurate classification of hospital 30-day mortality after the addition of NIHSS score to the claims model. Although both claims models resulted in wide variance of predicted risk between the most extreme deciles, the model with NIHSS score exhibited better agreement between observed and predicted mortality rates (Hosmer-Lemeshow goodness-of-fit test χ2, 352.9 vs 173.8; P < .001). Of the 782 hospitals in the analysis, the median absolute change in rank position was 79 places (IQR, 35-155) when hospitals were ranked with risk models without and with NIHSS score. The numbers of hospitals in which the


ranking categories changed based on the 30-day mortality models with and without NIHSS score are shown in TABLE 3. Compared across the 20%/60%/20% rankings generated by the 2 models, the weighted κ was 0.585 (unweighted κ, 0.530), and the model without NIHSS score differed in category for 206 of 782 hospitals (26.3%) (Table 3). There was considerable disagreement between the models with and without NIHSS score regarding which hospitals were in the top 5th percentile with respect to the lowest risk-adjusted mortality, with a weighted κ of only 0.533 (unweighted κ, 0.521). Of the 39 hospitals identified as top performing (top 5th percentile) by the model without NIHSS score, only 23 were identified by the model with NIHSS score, and 16 other hospitals were newly identified (Table 3). There was even greater disagreement about the bottom-performing hospitals. Of the 40 bottom-performing hospitals according to the claims model without NIHSS score, 19 (47.5%) were no longer identified as being in the worst 5th percentile for mortality after applying the model that adjusted for NIHSS score. For the analysis of hospitals ranked as better than, no different than, and worse than expected for 30-day risk-standardized mortality, 9 of 22 hospitals (40.9%) identified as having better than expected mortality rates in the model without NIHSS score were reclassified to as-expected mortality rates in the model with NIHSS score, and 15 of 26 hospitals (57.7%) classified as having worse than expected outcomes were reclassified to as expected (Table 3). The weighted κ was 0.502 and the unweighted κ, 0.494.

To evaluate whether selection bias was introduced by analyzing the population of acute ischemic stroke patients with NIHSS score documented, the discrimination of the claims model in the overall population (n = 252 379) with or without NIHSS score documented was evaluated. The C statistic (0.772; 95% CI, 0.769-0.774) was similar to that demonstrated for the patients with NIHSS score documented.

COMMENT

Using data from the GWTG-Stroke registry combined with administrative

Table 2. Performance of 30-Day Mortality Risk Models for Acute Ischemic Stroke Without and With NIHSS Score a
                                         Risk Model Without     Risk Model With
                                         NIHSS Score            NIHSS Score            Difference (95% CI)    P Value
C statistic (95% CI)                     0.772 (0.769-0.776)    0.864 (0.861-0.867)    0.091 (0.088-0.094)    <.001
Generalized R2                           0.174                  0.335                  0.161 (0.155-0.167)    <.001
Predicted event rate, lowest decile, %   5.94                   4.81
Predicted event rate, highest decile, %  47.13                  65.49
NRI index, %                                                                           93.1 (91.6-94.6)       <.001
IDI index, % b                           12.8                   27.7                   15.0 (14.6-15.3)       <.001

Abbreviations: IDI, integrated discrimination improvement; NIHSS, National Institutes of Health Stroke Scale; NRI, net reclassification improvement. a Age, sex, medical history of prior stroke or transient ischemic attack, and 84 condition categories listed in eTable 2 are adjusted for in the modeling. b Discrimination slope defined as difference of estimated mean probability for events and estimated mean probability for nonevents.
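Footnote b defines the discrimination slope, from which the IDI in Table 2 follows directly: a model's slope is its mean predicted risk among patients who died minus its mean among survivors, and the IDI is the difference in slopes between the models with and without NIHSS score. A toy sketch (the predicted risks below are invented, not study values):

```python
# Sketch of the IDI: difference in discrimination slopes between two models,
# where a slope = mean predicted risk in events - mean predicted risk in
# nonevents. Toy predictions for illustration only.

def discrimination_slope(pred, died):
    ev = [p for p, d in zip(pred, died) if d == 1]
    ne = [p for p, d in zip(pred, died) if d == 0]
    return sum(ev) / len(ev) - sum(ne) / len(ne)

died       = [1, 1, 0, 0, 0]
pred_base  = [0.30, 0.20, 0.15, 0.10, 0.05]  # hypothetical claims-only model
pred_nihss = [0.60, 0.40, 0.12, 0.08, 0.05]  # hypothetical claims + NIHSS model

idi = discrimination_slope(pred_nihss, died) - discrimination_slope(pred_base, died)
print(round(idi, 3))
```

In Table 2 the slopes are 12.8% and 27.7%, giving the reported IDI of 15.0% and a relative IDI of 1.17 (the slope more than doubles when NIHSS is added).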

Table 3. Hospital Ranking Agreement Based on 30-Day Mortality Risk Models Without and With Adjustment for NIHSS Score
Rank Based on Model             Rank Based on Model With NIHSS Score
Without NIHSS Score        Top 20%    Middle 60%    Bottom 20%    Total
Top 20%                    100        55            1             156
Middle 60%                 55         367           47            469
Bottom 20%                 1          47            109           157
Total                      156        469           157           782

Rank Based on Model             Rank Based on Model With NIHSS Score
Without NIHSS Score        Top 5%     Middle 90%    Bottom 5%     Total
Top 5%                     23         16            0             39
Middle 90%                 16         668           19            703
Bottom 5%                  0          19            21            40
Total                      39         703           40            782

Rank Based on Model             Rank Based on Model With NIHSS Score
Without NIHSS Score        Better Than   As           Worse Than
                           Expected a    Expected a   Expected a    Total
Better than expected a     13            9            0             22
As expected a              15            713          6             734
Worse than expected a      0             15           11            26
Total                      28            737          17            782

Abbreviation: NIHSS, National Institutes of Health Stroke Scale. a Hospitals with the 95% credible intervals of the estimated random intercepts not covering the null point are considered to have performance that is significantly better or worse than the average hospital. These categories are analogous to the portions of hospitals identified in Hospital Compare as having better than, no different than, or worse than expected 30-day risk-standardized mortality rates.
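The κ statistics reported in the text can be recomputed from a cross-classification like those in Table 3. The 3x3 counts below are consistent with the reported marginal totals for the 20%/60%/20% comparison (the top-left cell, 100, is inferred from those totals); with them, the unweighted κ reproduces the reported 0.530 and a linearly weighted κ the reported 0.585:

```python
# Recompute unweighted and linearly weighted Cohen kappa from a 3x3
# cross-classification (rows: rank without NIHSS; columns: rank with NIHSS).
# Counts are consistent with the marginal totals reported in Table 3.

def kappas(table):
    n = sum(sum(row) for row in table)
    k = len(table)
    rows = [sum(row) for row in table]
    cols = [sum(table[i][j] for i in range(k)) for j in range(k)]
    # Unweighted kappa: chance-corrected exact agreement.
    po = sum(table[i][i] for i in range(k)) / n
    pe = sum(rows[i] * cols[i] for i in range(k)) / n ** 2
    unweighted = (po - pe) / (1 - pe)
    # Linearly weighted kappa: disagreements penalized by |i - j|.
    obs = sum(abs(i - j) * table[i][j] for i in range(k) for j in range(k)) / n
    exp = sum(abs(i - j) * rows[i] * cols[j] for i in range(k) for j in range(k)) / n ** 2
    weighted = 1 - obs / exp
    return unweighted, weighted

table_20_60_20 = [[100, 55, 1],
                  [55, 367, 47],
                  [1, 47, 109]]
uw, w = kappas(table_20_60_20)
print(round(uw, 3), round(w, 3))
```

The weighted version exceeds the unweighted one here because most disagreements are one category apart (for example, top 20% vs middle 60%) rather than two.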


HOSPITAL PERFORMANCE IN ACUTE ISCHEMIC STROKE

claims data, we found that acute ischemic stroke risk-adjustment models varied in their ability to accurately predict hospital 30-day mortality risk depending on whether or not they adjusted for initial stroke severity. Beyond substantial differences in model discrimination and calibration, a hospital's variance from its expected, risk-standardized 30-day mortality outcomes, relative to its peers, frequently changed based on which risk-adjustment model was applied. More than 40% of hospitals identified in the top or bottom 5% of hospital risk-adjusted mortality would have been reclassified into the middle mortality range using a model adjusting for NIHSS score compared with a model without NIHSS score adjustment. Similarly, when considering the top 20% and bottom 20% of ranked hospitals, close to one-third of hospitals would have been reclassified. These findings highlight the importance of including a valid, specific measure of stroke severity in hospital risk models for mortality after acute ischemic stroke for Medicare beneficiaries. Furthermore, this study suggests that inclusion of admission stroke severity may be essential for optimal ranking of hospitals with respect to 30-day mortality. The increasing use of 30-day mortality rates to assess hospital quality has intensified their importance.

This outcome measure has the potential to give both patients and clinicians important feedback concerning a hospital's quality of care.1 Because hospitals may treat patients with differing case mix and illness severity, adequate risk adjustment is essential.11 Outcome measures that do not adequately risk-adjust may systematically favor hospitals that care for patients with less severe illness, regardless of whether these hospitals' approaches to patient management contributed to better or worse patient outcomes.11 Whether or not clinicians agree with the basic tenets of hospital comparisons, nearly all agree that, if outcomes are to be compared, such comparisons should be done only after appropriately risk-adjusting the results.11,24,25 The primary concern with outcome measures that do not adequately discriminate mortality risk is that hospital rankings based on these models may distort hospital profiling and quality assessment.24,25 Thus, a key question confronting clinicians, hospitals, payers, and policy makers is whether current and emerging measures that assess 30-day mortality are adequate for public reporting and for use in rewarding and penalizing hospitals in value-based purchasing.7 It has been previously reported that risk-standardization models for nonstroke conditions that adjust for demographics and comorbid conditions based on administrative claims data are sufficient for public reporting, despite not adjusting for indicators of disease severity, laboratory test results, and diagnostic studies at time of presentation.3-6 The CMS is now considering an outcome measure of 30-day mortality for acute ischemic stroke.26 In this GWTG-Stroke analysis, we demonstrate that a hospital risk model based on claims data alone, without adjustment for stroke severity, has substantially worse discrimination than a model that adjusts for stroke severity using the NIHSS score. This study also suggests that the ranking of hospitals could be confounded if the risk-adjustment models do not take into account the severity of the acute ischemic stroke, demographics, and other factors present at the time of acute ischemic stroke presentation. For conditions such as heart failure, acute myocardial infarction, and pneumonia, it is believed that claims-only models for 30-day mortality can adequately discriminate mortality risk at the hospital level for Medicare patients.3-5 However, in contrast to these conditions, our findings suggest that this is not the case for hospital risk models for acute ischemic stroke.
It logically follows that a measure of stroke severity would be essential for optimal discrimination of mortality risk, as the NIHSS score is well documented to be a key risk determinant in acute ischemic stroke.12-15 Prior patient-level analyses have shown that NIHSS score was the strongest predictive variable for in-hospital and 30-day mortality and substantially improved the performance of a model based on clinical variables without stroke severity.14,15 Our findings are also consistent with, and substantially extend, prior smaller and regionally restricted analyses. An analysis of 2 administrative data prediction models used to assess New York hospitals found that, in the absence of a measure of index stroke severity, the mortality prediction models were noncongruent and yielded hospital rankings that agreed only slightly more often than expected by chance.24 As public reporting and value-based purchasing policies for outcome measures increase, it is important to recognize the effect that using models with less than ideal discrimination and calibration has on the ranking of hospitals, as well as the lack of correlation among rankings by models that do and do not adjust for critical risk determinants.7 To truly identify the highest- and lowest-performing hospitals for acute ischemic stroke outcomes, models that adjust for stroke severity will likely be needed. It is also important to consider that rewarding or punishing hospitals on the basis of a risk model that does not account for stroke severity may misalign incentives, as many hospitals identified as performing better than or worse than expected were likely misclassified. As a consequence, hospitals may consider turning away patients with more severe strokes or transferring them to other hospitals after emergency department assessment to avoid being misclassified as having higher risk-standardized mortality. A 30-day mortality model for acute ischemic stroke without adjustment for stroke severity provides lesser discrimination, produces different rankings of hospital performance, and may be biased in favor of hospitals treating less severe strokes compared with a model that adjusts for stroke severity.
Although the present analysis supports collection of and adjustment for stroke severity in hospital 30-day mortality models, NIHSS score was recorded for only 50.7% of hospitalized acute ischemic stroke patients during the study period. The time and expertise needed to perform even a short standardized stroke severity assessment, and the need to ensure that these data are accurately abstracted and entered into the Hospital Compare data collection system, are important barriers that will need to be overcome.6 The increasing use of quality measures reporting the proportion of acute ischemic stroke patients with NIHSS scores recorded may facilitate this process.27 The Hospital Alliance and CMS should consider requiring the collection and reporting of the initial NIHSS score for all hospitalized acute ischemic stroke patients prior to implementation of a 30-day mortality risk-standardized model.

Several limitations of our study should be noted. First, the patient population studied comprised Medicare fee-for-service beneficiaries enrolled in GWTG-Stroke and may not be representative of all patients hospitalized with acute ischemic stroke. However, the differences between patients with and without NIHSS score recorded were small, and other recent analyses suggest that fee-for-service Medicare patients enrolled in GWTG-Stroke are representative of the entire US population of fee-for-service Medicare patients.20 Second, this study includes only patients in fee-for-service Medicare and thus does not include patients enrolled in managed care, uninsured individuals, or patients younger than 65 years of age. Third, variables used for risk adjustment were drawn from claims data and are dependent on their accuracy, as is any model derived from Medicare administrative claims databases. Fourth, these risk models did not adjust for therapies provided, such as tissue plasminogen activator. Fifth, as is the convention for CMS mortality models, the primary outcome was all-cause mortality rather than cause-specific mortality (eg, stroke-specific mortality).3-5 We did not assess 30-day rehospitalization, health-related quality of life, functional recovery, pa-
Fifth, as is the convention for CMS mortality models, the primary outcome was all-cause mortality rather than cause-specific mortality (eg, stroke-specific mortality).3-5 We did not assess 30-day rehospitalization, health-related quality of life, functional recovery, patient satisfaction, or other clinical outcomes that may be of interest for hospital outcome measures, and the need to adjust for stroke severity in these other outcome measures requires further study. Sixth, the hospital ranking methodologies applied did not use shrinkage analysis and may have differed in other ways from those currently applied by CMS.3-5 Whether these differences would affect our findings requires additional analysis, but prior studies have suggested that the selection of risk adjustors has far more influence on ranking profiles than the choice of statistical strategy.28

CONCLUSIONS

Adding stroke severity assessed with the NIHSS score to a hospital 30-day mortality model based on claims data for Medicare beneficiaries with acute ischemic stroke is associated with substantial improvement in model discrimination and with changes in mortality performance ranking for a considerable proportion of hospitals. These findings suggest that it may be critical to collect and include stroke severity for optimal hospital risk adjustment of 30-day mortality for Medicare beneficiaries with acute ischemic stroke.
Author Contributions: Dr Fonarow had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Fonarow, Saver, Reeves, Kleindorfer, Schwamm.
Acquisition of data: Fonarow, Saver, Peterson.
Analysis and interpretation of data: Fonarow, Pan, Saver, Smith, Reeves, Broderick, Kleindorfer, Sacco, Olson, Hernandez, Peterson, Schwamm.
Drafting of the manuscript: Fonarow.
Critical revision of the manuscript for important intellectual content: Fonarow, Pan, Saver, Smith, Reeves, Broderick, Kleindorfer, Sacco, Olson, Hernandez, Peterson, Schwamm.
Statistical analysis: Pan, Saver, Peterson.
Obtained funding: Fonarow.
Administrative, technical, or material support: Fonarow, Saver, Kleindorfer, Olson, Hernandez, Schwamm.
Study supervision: Fonarow, Olson, Schwamm.

Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Dr Fonarow reported serving as a member of the Get With The Guidelines (GWTG) Steering Committee and receiving research support from the National Institutes of Health and is an employee of the University of California, which holds a patent on retriever devices for stroke. Dr Pan is a member of the Duke Clinical Research Institute (DCRI), which serves as the American Heart Association (AHA) GWTG data coordinating center. Dr Saver reported serving as a member of the GWTG Science Subcommittee and as a scientific consultant regarding trial design and conduct to Covidien, CoAxia, Talecris, BrainsGate, Sygnis, and Ev3 and is an employee of the University of California, which holds a patent on retriever devices for stroke. Dr Smith reported having served on an advisory board to Genentech and is on the data safety and monitoring board for the MR Witness trial. Dr Reeves reported receiving salary support from the Michigan Stroke Registry and serving as a member of the AHA GWTG Quality Improvement Subcommittee. Dr Broderick reported receiving funding from the National Institute of Neurological Disorders and Stroke (NINDS) for multiple ongoing trials; receiving study medications from Genentech for 2 ongoing NINDS studies and having received payment as a consultant to Genentech; and having been reimbursed for travel to meetings by Genentech. Dr Sacco is immediate past president of the AHA. Dr Olson is a member of the DCRI, which serves as the AHA GWTG data coordinating center, and reported serving as a consultant to Bristol-Myers Squibb/Sanofi. Dr Hernandez is a member of the DCRI, which serves as the AHA GWTG data coordinating center, and reported being a recipient of an AHA Pharmaceutical Roundtable grant and having received research support from Johnson & Johnson and Amylin. Dr Peterson reported serving as principal investigator of the Data Analytic Center for AHA GWTG; reported receiving research grants from Johnson & Johnson, Eli Lilly, and Janssen Pharmaceuticals; and reported serving as a consultant to Boehringer Ingelheim, Johnson & Johnson, Medscape, Merck, Novartis, Ortho-McNeil-Janssen, Pfizer, Westat, the Cardiovascular Research Foundation, WebMD, and United Healthcare. Dr Schwamm reported serving as chair of the AHA GWTG Steering Committee and as a consultant to the Massachusetts Department of Public Health. No other disclosures were reported.
Funding/Support: The GWTG-Stroke program is provided by the AHA/American Stroke Association. The GWTG-Stroke program is currently supported in part by a charitable contribution from Janssen Pharmaceutical Companies of Johnson & Johnson. GWTG-Stroke has been funded in the past through support from Boehringer Ingelheim, Merck, the Bristol-Myers Squibb/Sanofi Pharmaceutical Partnership, and the AHA Pharmaceutical Roundtable.

Role of the Sponsor: The industry sponsors of GWTG-Stroke had no role in the design and conduct of the study; in the collection, analysis, and interpretation of the data; or in the preparation, review, or approval of the manuscript.

Disclaimer: Dr Peterson, Contributing Editor for JAMA, was not involved in the editorial review of or the decision to publish this article.

Online-Only Material: The eMethods and eTables are available at http://www.jama.com.

REFERENCES

1. Jha AK, Li Z, Orav EJ, Epstein AM. Care in US hospitals: the Hospital Quality Alliance program. N Engl J Med. 2005;353(3):265-274.
2. Krumholz HM, Normand SL, Spertus JA, Shahian DM, Bradley EH. Measuring performance for treating heart attacks and heart failure: the case for outcomes measurement. Health Aff (Millwood). 2007;26(1):75-85.
3. Krumholz HM, Normand SL. Public reporting of 30-day mortality for patients hospitalized with acute myocardial infarction and heart failure. Circulation. 2008;118(13):1394-1397.
4. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693-1701.

5. Lindenauer PK, Bernheim SM, Grady JN, et al. The performance of US hospitals as reflected in risk-standardized 30-day mortality and readmission rates for Medicare beneficiaries with pneumonia. J Hosp Med. 2010;5(6):E12-E18.
6. Hospital Compare. Department of Health and Human Services. http://www.hospitalcompare.hhs.gov. Accessed January 24, 2012.
7. Centers for Medicare & Medicaid Services (CMS), HHS. Medicare program; hospital inpatient value-based purchasing program: final rule. Fed Regist. 2011;76(88):26490-26547.
8. Roger VL, Go AS, Lloyd-Jones DM, et al; American Heart Association Statistics Committee and Stroke Statistics Subcommittee. Heart disease and stroke statistics: 2011 update: a report from the American Heart Association. Circulation. 2011;123(4):e18-e209.
9. Lichtman JH, Leifheit-Limson EC, Jones SB, et al. Predictors of hospital readmission after stroke: a systematic review. Stroke. 2010;41(11):2525-2533.
10. Lichtman JH, Jones SB, Wang Y, Watanabe E, Leifheit-Limson E, Goldstein LB. Outcomes after ischemic stroke for hospitals with and without Joint Commission-certified primary stroke centers. Neurology. 2011;76(23):1976-1982.
11. Krumholz HM, Brindis RG, Brush JE, et al; American Heart Association; Quality of Care and Outcomes Research Interdisciplinary Writing Group; Council on Epidemiology and Prevention; Stroke Council; American College of Cardiology Foundation. Standards for statistical models used for public reporting of health outcomes: an American Heart Association Scientific Statement from the Quality of Care and Outcomes Research Interdisciplinary Writing Group: cosponsored by the Council on Epidemiology and Prevention and the Stroke Council. Circulation. 2006;113(3):456-462.
12. Weimar C, König IR, Kraywinkel K, Ziegler A, Diener HC; German Stroke Study Collaboration. Age and National Institutes of Health Stroke Scale score within 6 hours after onset are accurate predictors of outcome after cerebral ischemia: development and external validation of prognostic models. Stroke. 2004;35(1):158-162.
13. Nedeltchev K, Renz N, Karameshev A, et al. Predictors of early mortality after acute ischaemic stroke. Swiss Med Wkly. 2010;140(17-18):254-259.
14. Smith EE, Shobha N, Dai D, et al. Risk score for in-hospital ischemic stroke mortality derived and validated within the Get With The Guidelines-Stroke Program. Circulation. 2010;122(15):1496-1504.
15. Fonarow GC, Saver JL, Smith EE, et al. Relationship of National Institutes of Health Stroke Scale to 30-day mortality in Medicare beneficiaries with acute ischemic stroke. J Am Heart Assoc. 2012;1(1):42-50. doi:10.1161/JAHA.111.000034.
16. Fonarow GC, Reeves MJ, Zhao X, et al; Get With The Guidelines-Stroke Steering Committee and Investigators. Age-related differences in characteristics, performance measures, treatment trends, and outcomes in patients with ischemic stroke. Circulation. 2010;121(7):879-891.
17. Fonarow GC, Smith EE, Reeves MJ, et al; Get With The Guidelines Steering Committee and Hospitals. Hospital-level variation in mortality and rehospitalization for Medicare beneficiaries with acute ischemic stroke. Stroke. 2011;42(1):159-166.
18. American Hospital Association. American Hospital Association Hospital Statistics, 2009 Edition. Chicago, IL: American Hospital Association; 2009.
19. Hammill BG, Hernandez AF, Peterson ED, Fonarow GC, Schulman KA, Curtis LH. Linking inpatient clinical registry data to Medicare claims data using indirect identifiers. Am Heart J. 2009;157(6):995-1000.
20. Reeves MJ, Fonarow GC, Smith EE, et al. Representativeness of the Get With The Guidelines-Stroke Registry: comparison of patient and hospital characteristics among Medicare beneficiaries hospitalized with ischemic stroke. Stroke. 2012;43(1):44-49.
21. Adams HP Jr, Davis PH, Leira EC, et al. Baseline NIH Stroke Scale score strongly predicts outcome after stroke: a report of the Trial of Org 10172 in Acute Stroke Treatment (TOAST). Neurology. 1999;53(1):126-131.
22. Pencina MJ, D'Agostino RB Sr, D'Agostino RB Jr, Vasan RS. Evaluating the added predictive ability of a new marker: from area under the ROC curve to reclassification and beyond. Stat Med. 2008;27(2):157-172.
23. Cook NR, Ridker PM. Advances in measuring the effect of individual predictors of cardiovascular risk: the role of reclassification measures. Ann Intern Med. 2009;150(11):795-802.
24. Kelly A, Thompson JP, Tuttle D, Benesch C, Holloway RG. Public reporting of quality data for stroke: is it measuring quality? Stroke. 2008;39(12):3367-3371.
25. Hammill BG, Curtis LH, Fonarow GC, et al. Incremental value of clinical data beyond claims data in predicting 30-day outcomes after heart failure hospitalization. Circ Cardiovasc Qual Outcomes. 2011;4(1):60-67.
26. Centers for Medicare and Medicaid Services (CMS), HHS. Medicare program; hospital inpatient prospective payment systems for acute care hospitals and the long-term care hospital prospective payment system and FY 2012 rates; hospitals' FTE resident caps for graduate medical education payment: final rules. Fed Regist. 2011;76(160):51476-51846.
27. Leifer D, Bravata DM, Connors JJ III, et al; American Heart Association Special Writing Group of the Stroke Council; Atherosclerotic Peripheral Vascular Disease Working Group; Council on Cardiovascular Surgery and Anesthesia; Council on Cardiovascular Nursing. Metrics for measuring quality of care in comprehensive stroke centers: detailed follow-up to Brain Attack Coalition comprehensive stroke center recommendations: a statement for healthcare professionals from the American Heart Association/American Stroke Association. Stroke. 2011;42(3):849-877.
28. Huang IC, Dominici F, Frangakis C, Diette GB, Damberg CL, Wu AW. Is risk-adjustor selection more important than statistical approach for provider profiling? Asthma as an example. Med Decis Making. 2005;25(1):20-34.


JAMA, July 18, 2012, Vol 308, No. 3

© 2012 American Medical Association. All rights reserved.
