Which Hospitals Have Significantly Better or Worse Than Expected Mortality Rates for Acute Myocardial Infarction Patients?
Improved Risk Adjustment With Present-at-Admission Diagnoses
Background— Public reports that compare hospital mortality rates for patients with acute myocardial infarction are commonly used strategies for improving the quality of care delivered to these patients. Fair comparisons of hospital mortality rates require thorough adjustments for differences among patients in baseline mortality risk. This study examines the effect on hospital mortality rate comparisons of improved risk adjustment methods using diagnoses reported as present-at-admission.
Methods and Results— Logistic regression models and related methods originally used by California to compare hospital mortality rates for patients with acute myocardial infarction are replicated. These results are contrasted with results obtained for the same hospitals by patient-level mortality risk adjustment models using present-at-admission diagnoses, using 3 statistical methods of identifying hospitals with higher or lower than expected mortality: indirect standardization, adjusted odds ratios, and hierarchical models. Models using present-at-admission diagnoses identified substantially fewer hospitals as outliers than did California model A for each of the 3 statistical methods considered.
Conclusions— Large improvements in statistical performance can be achieved with the use of present-at-admission diagnoses to characterize baseline mortality risk. These improvements are important because models with better statistical performance identify different hospitals as having better or worse than expected mortality.
Received April 30, 2007; accepted September 21, 2007.
Hospital mortality rates for patients with acute myocardial infarction (AMI) are commonly used indicators of the quality of care provided by hospitals. State government agencies in California,1 New York,2 and Pennsylvania3 publicly report which hospitals have higher or lower than expected mortality rates for AMI patients. At the national level, risk-adjusted mortality outcomes for AMI patients are used to determine part of the financial reimbursement received by hospitals participating in the Medicare pay-for-performance demonstration project.4
AMI mortality risk adjustment is typically accomplished at the state and national population level with the use of existing hospital administrative data resources because of the expense of abstracting additional information from patient records.5 Fair comparison of AMI mortality rates across hospitals requires effective adjustments for differences among patients in their baseline mortality risk.6 Toward this end, most existing hospital administrative data will soon be supplemented by information distinguishing secondary diagnoses that are present at admission from those that are complications or adverse events that occur during the hospital stay. The Deficit Reduction Act of 2005 required that Medicare hospital administrative data be modified to report present-at-admission information.7 National standards for uniform hospital claims reporting now include data elements for identifying which diagnoses are present at admission.8
Hospitals in California have reported which diagnoses are present at admission since 1996. Prior research demonstrates that an AMI mortality risk adjustment model using present-at-admission diagnoses obtains substantially higher statistical performance than California’s existing AMI mortality risk adjustment method, which was developed before information about which diagnoses were present at admission was available.9 Adjustments for present-at-admission diagnoses increased discrimination from 0.76 to 0.86, as measured by the c statistic, and increased the proportion of the total log-likelihood explained from 13% to 30%, as measured by the generalized R2 statistic. Mortality risk adjustment models using present-at-admission diagnoses have also been demonstrated to obtain large increases in statistical performance in other study populations.10–12
Present-at-admission diagnoses may allow substantial improvements in the validity of hospital mortality comparisons by eliminating diagnoses representing complications of care from the risk adjustment algorithms.13 Several prior studies indicate that identifying which hospitals have higher or lower than expected AMI mortality can depend on the risk adjustment method used.14–16 However, no prior studies have examined how the availability of present-at-admission diagnoses influences risk-adjusted comparisons of hospital AMI mortality rates.
In this study, we examine how improvements in patient-level adjustments for AMI mortality risk with the use of present-at-admission diagnoses affect hospital-level comparisons, using 3 different statistical methods for comparing hospitals. First, we reproduce the methods used by the state of California to calculate publicly reported hospital standardized mortality rates (SMRs) for AMI and compare them with results obtained for the same hospitals by patient-level adjustments using present-at-admission diagnoses. Second, we calculate the adjusted odds of death associated with specific hospitals using the replicated California model to adjust for differences in patient mortality risk and compare those with the adjusted odds of death for the same hospitals calculated using the present-at-admission model. Finally, we compare hospital SMRs obtained from hierarchical generalized linear models formulated using present-at-admission diagnoses with SMRs obtained from a similar hierarchical model using the replicated California AMI mortality risk adjustment model.
The study population includes patients hospitalized in California with AMI from January 1996 through November 1998. This population is a replica of the study population used to calculate publicly reported hospital SMRs for AMI patients by the state of California. The replica was developed from California hospital discharge abstract data available for public use, with patient selection criteria matched to the inclusion and exclusion criteria described in the California reports.17,18 This research was reviewed and determined to be exempt human subject research by the Human Investigations Committee of the University of Virginia.
Although the original criteria were duplicated as closely as possible, some criteria were not completely matched because of required data elements that were absent, abbreviated, or encrypted in the public use data files. Primary inclusion criteria were principal diagnoses for AMI and patient discharge dates. Primary exclusion criteria were patient age <18 years, admission from long-term care facilities and other selected sources of admission, rehospitalization, and transfer from another acute care hospital. In the original California study, a few hospitals were excluded on the basis of potential undercoding or overcoding of selected risk factors used in the California AMI mortality risk adjustment models. No hospitals were excluded from the replicated study population because we sought to develop a new mortality risk adjustment model using mortality risk factors defined by present-at-admission diagnoses.
California Hospital Outcomes Project Model
The original California report used 2 mortality risk adjustment models, model A and model B, to compare hospital mortality rates. Model A includes adjustments for demographic characteristics, descriptions of the infarct location, and selected comorbidities that substantially increased inpatient mortality risk and that were unlikely to be complications. Model B includes covariates from model A plus additional covariates for shock, hypertension, pulmonary edema, complete atrioventricular block, pleural effusion, urinary tract infection, syncope, acidosis, alkalosis, sepsis, paroxysmal ventricular tachycardia, hyponatremia or hyposmolality, hypernatremia or hyperosmolality, gastrointestinal hemorrhage, pneumonia, aspiration pneumonitis, and unstable angina. Model A is described as a more conservative adjustment for baseline characteristics in the original California study reports because the added covariates included in model B can represent complications of care instead of conditions present at admission for some patients. Prior research indicates that the conditions included in California model A are reliably present at admission but that the additional covariates included by model B often reflect conditions that occurred after the patient’s admission.9,19 We limited our analysis to results obtained by model A.
Present-at-Admission Diagnoses Model
We compared results obtained by California model A with results obtained by a mortality risk adjustment model that uses International Classification of Diseases, Ninth Revision, Clinical Modification secondary diagnosis codes reported to be present at admission for each discharged patient to adjust for the effects of comorbid disease.9 The model includes adjustments for 233 categories of comorbid disease measured by present-at-admission diagnoses. The model also includes adjustments for the location of the myocardial infarction, patient race, Hispanic ethnicity, gender, type of insurance, whether the hospitalization was an emergency admission, whether the patient had been transferred from another acute care hospital, and the patient’s age in years. Complete details about the replica study population, logistic regression model estimation process, model covariates, parameter estimates, and validation of statistical performance for both California model A and the present-at-admission model are available in previously reported results.9
Three Methods for Comparing Hospitals
Patient-level mortality risk estimates from California model A and from the present-at-admission model were used in 3 commonly used methods for comparing hospital-level differences in mortality. First, SMRs were calculated for each hospital by indirect standardization with both the replicated California model A and the present-at-admission model. Multivariable logistic regression and the maximum likelihood method were used to estimate the adjusted risk of inpatient death for both models. Indirectly standardized SMRs for each hospital were obtained by calculating the ratio of the observed number of deaths to the sum of the model-estimated risks of death for the hospital’s patients, multiplied by the unadjusted statewide mortality rate for all hospitals. In the original California report, each hospital was classified as having a significantly better (or worse) than expected number of deaths on the basis of 98% CIs calculated for each hospital’s standardized mortality ratio with the use of the normal approximation, supplemented by a recursive algorithm used to calculate probabilities of events in hospitals with <16 observed deaths.20,21 We reproduced these CI calculations for SMRs obtained by the replicated California model A and for SMRs obtained by the present-at-admission model.
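As a concrete illustration, the indirect standardization step can be sketched as follows; the hospital identifiers, outcomes, and risk values below are hypothetical, standing in for the model-estimated probabilities of death rather than any values from the study data:

```python
# Indirectly standardized mortality rate for one hospital:
# (observed deaths / expected deaths) * statewide unadjusted mortality rate.
# Patient records: (hospital_id, died, model_estimated_risk) -- illustrative values.
patients = [
    ("A", 1, 0.30), ("A", 0, 0.10), ("A", 0, 0.05),
    ("B", 0, 0.20), ("B", 1, 0.40), ("B", 1, 0.25),
]

# unadjusted statewide mortality rate over all patients
statewide_rate = sum(died for _, died, _ in patients) / len(patients)

def smr(hospital_id):
    observed = sum(d for h, d, _ in patients if h == hospital_id)
    expected = sum(r for h, _, r in patients if h == hospital_id)
    return (observed / expected) * statewide_rate

for h in ("A", "B"):
    print(h, round(smr(h), 3))
```

Because the ratio of observed to expected deaths is multiplied by the statewide rate, the resulting SMR is on the scale of a mortality rate and can be compared directly with the unadjusted statewide rate.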
A simpler approach often used for comparing hospitals is to calculate adjusted odds of death for each hospital by including a categorical variable with values for individual hospitals within the patient-level mortality risk adjustment model.22 We used this second method to calculate mortality risk–adjusted odds of death for individual hospitals by adding a predictor variable identifying each hospital to the replicated California model A, and similarly by adding the same predictor variable to the present-at-admission model. The adjusted odds of death for each hospital was calculated in reference to patient outcomes observed for a single hospital with a large number of patients and an SMR approximately equal to 1.0. Wald χ2 test statistics were used to calculate 98% CIs for the adjusted odds of death associated with each hospital obtained by adding a hospital categorical predictor to both risk adjustment models.
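The Wald-based interval construction for a hospital's adjusted odds of death can be sketched as follows; the coefficient and standard error shown are hypothetical, not estimates from either model:

```python
import math

# 98% Wald confidence interval for a hospital's adjusted odds ratio, given
# the hospital indicator's logistic regression coefficient (beta) and its
# standard error (se) -- illustrative values, not study estimates.
Z_98 = 2.326  # two-sided 98% critical value of the standard normal

def wald_ci_odds_ratio(beta, se):
    lo = math.exp(beta - Z_98 * se)
    hi = math.exp(beta + Z_98 * se)
    return math.exp(beta), (lo, hi)

or_, (lo, hi) = wald_ci_odds_ratio(beta=0.40, se=0.20)
# A hospital would be flagged as worse than expected only if the whole
# interval lay above 1.0; here the interval contains 1.0, so it is not flagged.
print(round(or_, 2), round(lo, 2), round(hi, 2))
```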
Both of these methods for indexing hospitals overlook the potential for clustered observations within hospitals and can result in “overdispersed” models that overestimate the amount of variation between hospitals.22,23 For this reason, an expert panel of the American Heart Association recommends hierarchical models as the standard for publicly reported comparisons of health outcomes.24 To address this concern, we formulated both California model A and the present-at-admission model as hierarchical models that included the fixed effects of the original patient-level covariates plus a series of hospital-specific intercepts known as random effects. In the random-effects formulation of the hierarchical generalized linear model, the fixed effects are equal across hospitals, representing the assumption that the effects of age, comorbidity, and other patient characteristics are the same across hospitals. Differences in mortality across hospitals are accounted for by variability in the random effects, controlling for the fixed effects of patient-level predictors of mortality.
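The shrinkage behavior that distinguishes the random-effects formulation can be illustrated with a deliberately simplified sketch; this is not the estimation procedure used in the study, only a toy weighted-average model showing how estimates for low-volume hospitals are pulled toward the overall mean:

```python
# Illustration of the shrinkage behavior of random-effects estimates:
# a hospital's rate is pulled toward the overall mean by a factor that grows
# as its patient volume shrinks. Simplified sketch with a hypothetical
# prior_strength parameter, not the study's actual hierarchical estimation.
def shrunken_rate(hospital_rate, n_patients, overall_rate, prior_strength=50):
    # weight on the hospital's own data rises with its patient volume
    w = n_patients / (n_patients + prior_strength)
    return w * hospital_rate + (1 - w) * overall_rate

# A small hospital with a high raw rate moves most of the way back toward
# the overall rate; a large hospital with the same raw rate barely moves.
print(shrunken_rate(0.20, 10, 0.10))    # small hospital
print(shrunken_rate(0.20, 2000, 0.10))  # large hospital
```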
SMRs were obtained for each hospital from the hierarchical models by calculating the ratio of the number of predicted deaths among the hospital’s patients over the number of deaths expected for those patients in a hospital with average performance, multiplied by the statewide unadjusted mortality rate.25,26 We used bootstrap simulation to calculate an empirical distribution for each hospital’s standardized mortality ratio.27 Three hundred samples, each equal to the size of the original study population, were drawn at random with replacement. Observations in each sample were included with probability proportional to hospital size so that hospitals entered the sampling algorithm in the form of stratum weights that modified the selection probability for individual cases drawn from the population for each sample.
The results of the bootstrap simulation were used to calculate 98% CIs for the hierarchical SMRs. Hospitals whose first percentile of the simulated hierarchical SMR distribution was above the unadjusted statewide rate were identified as having significantly worse than expected mortality, whereas hospitals whose 99th percentile was below the unadjusted statewide rate were identified as having significantly better than expected mortality. These standards of comparison were selected to match the 98% CIs used to identify hospitals in the original California report. Additional details about the calculation methods are presented in the appendix in the online-only Data Supplement.
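The percentile-based classification of each hospital's simulated SMR distribution can be sketched as follows; the simulated values are hypothetical, and the stratified, size-weighted resampling described above is not reproduced:

```python
import random

# Classify a hospital from the empirical distribution of its bootstrap SMRs,
# mirroring a 98% interval: worse than expected if the 1st percentile lies
# above the statewide rate, better if the 99th percentile lies below it.
# (Illustrative simulation only, not the study's stratified bootstrap.)
def classify(bootstrap_smrs, statewide_rate):
    s = sorted(bootstrap_smrs)
    p01 = s[int(0.01 * (len(s) - 1))]
    p99 = s[int(0.99 * (len(s) - 1))]
    if p01 > statewide_rate:
        return "worse"
    if p99 < statewide_rate:
        return "better"
    return "no difference"

random.seed(0)
statewide = 0.101
# a hypothetical hospital whose simulated SMRs sit well above the statewide rate
high = [random.gauss(0.14, 0.005) for _ in range(300)]
print(classify(high, statewide))
```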
Paired Comparisons of Hospital Indices
Hospital-specific results from California model A and from the present-at-admission model were compared for each of the 3 methods by cross-tabulating the number of hospitals identified as having better mortality, worse mortality, or no significant difference in mortality by the 2 models. The amount of agreement between the models was assessed with the κ statistic, which measures observed agreement in excess of the agreement expected by chance.28 A κ statistic of 1.0 indicates perfect agreement, a κ statistic of 0.0 indicates the amount of agreement that would be expected by chance alone, and κ statistics in the range between 0.4 and 0.6 indicate moderate agreement.29 Paired differences between California model A and the present-at-admission model for individual hospitals were also assessed by calculating the correlation and plotting the association between indices and by plotting the density of the distributions of indices from each model at the same scale. We also calculated the correlation and plotted the association and density of the random effects obtained from the hierarchical models estimated using California model A against the random effects obtained using the present-at-admission model.
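The κ calculation on a cross-tabulation of hospital classifications can be sketched as follows; the table counts are hypothetical, not the study's results:

```python
# Cohen's kappa for agreement between two models' three-way hospital
# classifications (better / no difference / worse), computed from a
# cross-tabulation: rows = model 1, columns = model 2. Toy counts.
def kappa(table):
    n = sum(sum(row) for row in table)
    observed = sum(table[i][i] for i in range(len(table))) / n
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    expected = sum(r * c for r, c in zip(row_tot, col_tot)) / n**2
    return (observed - expected) / (1 - expected)

# hypothetical cross-tabulation of 416 hospitals
table = [
    [10,   8,  0],  # model 1 "better"
    [ 9, 360, 12],  # model 1 "no difference"
    [ 0,   7, 10],  # model 1 "worse"
]
print(round(kappa(table), 2))  # prints 0.49, moderate agreement
```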
The authors had full access to the data and take responsibility for its integrity. All authors have read and agree to the manuscript as written.
We identified 120 706 patients with AMI discharged from 416 California hospitals during the period from January 1996 through November 1998 using the patient selection criteria reported in the original California reports. The total number of patients in the replicated study population is slightly smaller than the 128 509 patients with AMI included in the original California report. There were 12 178 in-hospital deaths in the reproduced study population (10.1%) compared with the 12 799 identified in the original study (10.0%). Mortality risk adjustment with the use of present-at-admission diagnoses yielded a substantially different set of hospitals with better or worse than expected mortality for each of the 3 methods of comparing hospital differences that we considered.
Table 1 lists cross-tabulation results comparing hospitals identified as having better or worse than expected SMRs calculated as in the original California report using the logistic models with fixed effects and with SMR differences distinguished at a probability threshold of 0.01. More than one half of the hospitals identified as having worse than expected SMRs by California model A were not identified by the model using present-at-admission diagnoses (21 of 36 [58%]). California model A found more hospitals with significant differences (55 of 416 [13%]) than the present-at-admission diagnoses model (35 of 416 [8%]), with moderate agreement overall about which specific hospitals were better (or worse), as reflected by the κ statistic of 0.42. Figure 1A demonstrates a positive but substantially dispersed association (Pearson r=0.87) between SMRs from the fixed-effects models. Figure 1B shows that the overall density of the present-at-admission model SMRs was similar to that of the SMRs obtained by California model A.
Table 2 lists cross-tabulation results comparing hospitals identified as having better or worse than expected odds of death by the logistic models at a probability threshold of 0.01. More than one half of the hospitals identified as having worse than expected adjusted odds of death by California model A were not identified by the model using present-at-admission diagnoses (17 of 26 [65%]). California model A also found more hospitals with significantly different adjusted odds of death (41 of 415 [10%]) than the present-at-admission diagnoses model (28 of 415 [7%]). Agreement about which specific hospitals were better (or worse) between models was moderate, as reflected by the κ statistic of 0.40. Figure 1C demonstrates that the association between the adjusted odds of death is positive but substantially dispersed across the range of values (Pearson r=0.89). Figure 1D shows that the overall distributions of these quantities were similar.
Table 3 lists cross-tabulation results comparing hospitals identified as having better or worse than expected SMRs by the hierarchical models with random effects. Both of the hierarchical models identified fewer hospitals as significantly different from the corresponding logistic models with fixed effects. One half of the hospitals identified as having worse than expected SMRs by the hierarchical formulation of California model A were not identified by the hierarchical formulation of the present-at-admission diagnoses model (3 of 6 [50%]). Agreement between the hierarchical models was similar to results for the other paired results, with κ of 0.43 indicating moderate agreement about which hospitals had significantly different SMRs.
Complete sets of 300 samples from the bootstrap simulation were obtained for 322 hospitals with California model A (78%) and for 327 hospitals with the present-at-admission diagnoses model (80%). CIs were not calculated for hospitals with incomplete samples. Hospitals with incomplete bootstrap results had very few patients. For California model A, 94 hospitals had incomplete bootstrap samples, and collectively they accounted for only 1648 patients or 1.4% of the total study population. For the present-at-admission diagnoses model, 89 hospitals had incomplete samples, 5 fewer than for California model A, and these hospitals likewise accounted for very few patients. For both hierarchical models, hospitals with incomplete samples were defined to have no significant difference.
Figure 2A displays the association between the random-effects model SMRs, which is positive and much more highly concentrated than the SMRs produced by the fixed-effects models (Pearson r=0.73). Figure 2B displays the density of the random-effects model SMRs, which for the present-at-admission model ranges from 0.09 to 0.11. A broader distribution of SMRs was obtained for California model A, which yielded SMRs from 0.08 to 0.12. Figure 2C demonstrates that the random effects from the 2 models are positively correlated and that the present-at-admission model obtains characteristically smaller estimates of the random effects for the same hospitals than those estimated by California model A (Pearson r=0.74). Figure 2D displays the density of the estimated random effects, demonstrating that the overall distribution from the present-at-admission model is more compact than the distribution resulting from California model A.
Table 4 presents the distribution of the difference between California model A and the present-at-admission model fixed-effects SMRs and random-effects SMRs calculated for each hospital. Results for the odds ratios are not reported because the odds ratios and fixed-effects SMRs were obtained from models that are identical, except for an additional covariate that identifies each hospital. Compared with fixed-effects model SMRs from California model A, SMRs from the present-at-admission model were higher by ≥2 percentage points in 10% of the hospitals (10th percentile) and were lower by ≥2 percentage points in 15% of the hospitals (85th percentile). The magnitude of the difference in random-effects model SMRs was highly compressed by the hierarchical models.
Figure 3 plots results for 46 hospitals with significantly worse than expected mortality identified by ≥1 model (P<0.01). For most of these hospitals, SMRs and odds ratios calculated with California model A were higher than those calculated with the present-at-admission model. Significantly worse than expected mortality was identified for 36 hospitals by fixed-effects SMRs from California model A. However, only 15 of the 36 hospitals were identified as having significantly worse than expected mortality by the present-at-admission model. There were 26 hospitals with worse than expected mortality identified by fixed-effects odds ratios from California model A. Only 9 of these 26 hospitals were identified as having significantly worse than expected mortality by the present-at-admission model. There were 6 hospitals with significantly worse than expected mortality identified by California model A using random-effects model SMRs, only 3 of which were also identified as significantly worse than expected by the present-at-admission model. Only 2 hospitals were identified as having significantly worse than expected mortality in every model formulation considered.
This study compares 2 patient-level mortality risk adjustment models using 3 methods for scoring hospital mortality rate differences: fixed-effects model SMRs, fixed-effects model–adjusted odds ratios, and hierarchical random-effects model SMRs. For each comparison, we found that patient-level mortality risk adjustment models using information about which diagnoses were present at admission obtained meaningfully different results about which hospitals have better or worse than expected mortality.
The present-at-admission model produced fixed-effects SMRs that differed from those produced by California model A by ≥2 percentage points in 25% of the hospitals. This scale of change is large. Given the average observed mortality rate of 10%, a difference of 2 percentage points in the SMR represents a relative change of 20%.
In each of the 3 formulations, models using present-at-admission diagnoses identified fewer hospitals as outliers than California model A. This difference occurs because the models using present-at-admission diagnoses included many more covariates for comorbid disease, reflecting the cumulative effect of many small adjustments for comorbid disease not included by California model A. Interestingly, while the hierarchical formulations of both models identified substantially fewer hospitals as outliers, the present-at-admission model identified more hospitals as worse than expected than California model A.
Hospital-specific odds ratios obtained from multivariable logistic regression are readily interpreted quantities with standard tests of statistical significance. SMRs obtained by aggregating the results of the logistic regression models with fixed effects are also readily interpreted, although CI calculations require special attention to ratios for hospitals with few deaths. Hierarchical random-effects model SMRs are complex quantities but are recommended as the most accurate indices for comparing hospitals because the confounding influence of within–hospital level variation is separated from the between–hospital level variation component that is of primary interest and because SMRs for hospitals with low sample size shrink toward the population average.24 For all 3 methods used to compare hospitals, patient-level mortality risk adjustment models with better statistical performance produced substantially different results about which hospitals have better or worse than expected mortality.
We identified hospitals as having better or worse than expected mortality using the internal standard of the 98% CI to match the standard used in the original California report. However, an external standard reflecting public healthcare policy objectives would be more meaningful than the internal standard of the data distribution.23 With an internal standard, hospitals are identified as outliers without regard to the practical significance of the magnitude of the difference in mortality. Hierarchical SMRs calculated with the use of the present-at-admission model were significantly worse than expected for 9 hospitals, but none of these hospitals had mortality rates >11.1% in our study population with an overall mortality rate of 10.1%.
The replicated study population was created with the use of closely matched inclusion and exclusion criteria, but it is not an exact reproduction. Some data elements used in the original California study were unavailable in the public use data used for our study. For example, inpatient death was used in our study instead of the 30-day mortality outcome used in the original study. The total number of patients and proportion of in-hospital deaths in the replicated study population were nearly equivalent to those reported in the original study. Our replication of the California risk adjustment model obtained exactly the same level of discrimination (c statistic=0.77) that was reported in the original study, and parameter estimates calculated for risk factors in the California model closely matched those reported in the original study. Exactly matching the original study population was not essential because the purpose of the study was to compare results from 2 different models applied to the same patients and hospitals. We were unable to include data for subsequent years in our study. California public use files for later periods exclude unique patient identifiers that we used to link hospital records for patients who were transferred to another acute care facility.
Substantial improvements in statistical performance at the patient level are obtained by using present-at-admission data to increase the amount of comorbid disease included in the risk adjustment. The value of these improvements depends on the quality of the present-at-admission indicator reported for each diagnosis. To our knowledge, the validity of the present-at-admission indicator has not been directly assessed through reabstraction studies comparing hospital-reported data with original patient medical records. Our indirect assessment of the data quality indicates that International Classification of Diseases, Ninth Revision, Clinical Modification diagnoses independently selected as reliable indicators of comorbid disease are overwhelmingly reported as present at admission, and International Classification of Diseases, Ninth Revision, Clinical Modification diagnoses for surgical complications and iatrogenic events are rarely reported as present at admission.9 Another potential limitation is the presence of systematic differences between hospitals in how patients with AMI are identified, although the effect of this bias would be the same in the models being compared. Data validation studies conducted by California as part of their hospital AMI mortality reporting process indicate that the patient information reported in these records is of high quality.19
Patient comorbidity and AMI severity can be measured more comprehensively by abstracting additional clinical data from patient medical records. However, additional data may not substantially improve the level of statistical performance achieved by an AMI mortality risk adjustment model using present-at-admission diagnoses. A prior study compared California model A with AMI mortality risk adjustment models including adjustments for systolic and diastolic blood pressure, heart rate, blood urea nitrogen, creatinine, white blood cell count, and other clinical information used to characterize patient mortality risk in the Cooperative Cardiovascular Project, the Global Utilization of Streptokinase and Tissue Plasminogen Activator for Occluded Coronary Arteries Trial, and the Medicare Mortality Predictor System.16 This prior study applied each model to the same study population of Medicare patients and obtained validated c statistics of 0.71 for California model A, 0.74 for the Global Utilization of Streptokinase and Tissue Plasminogen Activator for Occluded Coronary Arteries Trial, 0.78 for the Cooperative Cardiovascular Project model, and 0.78 for the Medicare Mortality Predictor System model. In our study, we obtained a validated c statistic of 0.76 for California model A and 0.86 for the model using conditions present at admission. The scale of the increase in statistical performance obtained with the use of present-at-admission diagnoses in our study exceeds that obtained by models using clinical data abstracted from patient medical records in this prior study.
Mortality risk adjustment using disease registry data provides rich clinical detail, but the expense of collecting such data is a major barrier to its use for statewide or national population-based studies. Our research demonstrates that large improvements in statistical performance are obtained by using present-at-admission diagnoses to more comprehensively adjust for differences in baseline mortality risk and that these improvements matter because models with better statistical performance identify substantially different hospitals as having better or worse than expected mortality.
Hospitals will soon be reporting which diagnoses were present at admission for all Medicare patients. Our study suggests that this information can substantially increase the amount of comorbid disease accounted for, resulting in meaningful changes in which hospitals are identified as having better or worse than expected mortality by the Medicare pay-for-performance initiative. Recently reported research also indicates that substantial reductions in Medicare hospital payments could be obtained if diagnosis related groups were calculated including only diagnoses reported as present at admission.30
The large difference in the total number of hospitals identified as better or worse than expected across methods of comparing hospitals is especially interesting. Fewer hospitals were identified as outliers by the odds ratio approach than were identified by SMRs from the fixed-effects model. The random-effects model approach identified the fewest hospitals as outliers, primarily because ratios obtained for hospitals with the lowest patient volumes shrink close to the overall hospital mean. Prior studies have compared these different statistical methods of identifying outlier hospitals and have obtained similar results.31,32 These results lead us to wonder which types of hospitals perform better or worse than expected across different methods for comparing hospitals. We plan to conduct further analyses to examine how patient volume, teaching status, and other hospital-level characteristics influence whether or not hospitals are identified as having better or worse than expected AMI patient mortality and to explore how hospital-level characteristics are related to differences in the effects of adjustments for clustered observations.
We thank the state of California, Office of Statewide Health Planning and Development, for making their collection of hospital patient discharge data available for public use.
This project was supported by grants R01 HS10134 and K02 HS11419 from the Agency for Health Care Research and Quality.
California Office of Statewide Health Planning and Development, Healthcare Quality and Analysis Division. Report on heart attack outcomes in California 1996–1998. Available at: http://www.oshpd.ca.gov/HQAD/Outcomes/Studies/HeartAttacks/index.htm. Accessed October 21, 2006.
New York State Department of Health. Adult cardiac surgery in New York State 2000–2002. Available at: http://www.health.state.ny.us/nysdoh/heart/heart_disease.htm. Accessed October 21, 2006.
Pennsylvania Healthcare Cost Containment Council. Interactive hospital performance reports. Available at: http://www.phc4submit.org/hpr/. Accessed October 21, 2006.
US Department of Health and Human Services, Centers for Medicare and Medicaid Services. Hospital compare. Available at: http://www.hospitalcompare.hhs.gov/. Accessed October 21, 2006.
Iezzoni LI. Coded data from administrative sources. In: Risk Adjustment for Measuring Health Care Outcomes. Chicago, Ill: Health Administration Press; 2003: 83–138.
109th US Congress. S. 1932, Deficit Reduction Act of 2005, Sec 5001 (c):25–27. Government Printing Office. Available at: http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=109_cong_bills&docid=f:s1932enr.txt.pdf. Accessed October 21, 2006.
National Uniform Billing Committee. UB-04 data specification manual (beta3). American Hospital Association. Available at: http://www.nubc.org/public/whatsnew/POA.pdf. Accessed October 21, 2006.
Stukenborg GJ, Kilbridge KL, Wagner DP, Harrell FE, Oliver MN, Lyman JA, Einbinder JS, Connors AF. Present-at-admission diagnoses improve mortality risk adjustment and allow more accurate assessment of the relationship between volume of lung cancer operations and mortality risk. Surgery. 2005; 138: 498–507.
Krumholz HM, Chen J, Wang Y, Radford MJ, Chen Y-T, Marciniak TA. Comparing AMI mortality among hospitals in patients 65 years of age and older: evaluating methods of risk adjustment. Circulation. 1999; 99: 2986–2992.
California Office of Statewide Health Planning and Development, Healthcare Quality and Analysis Division. Report on heart attack outcomes in California 1996–1998, volume 2: technical guide. Sacramento, Calif: California Office of Statewide Health Planning and Development; February 2002. Available at: http://www.oshpd.ca.gov/HQAD/Outcomes/Studies/HeartAttacks/index.htm. Accessed October 21, 2006.
California Office of Statewide Health Planning and Development, Healthcare Quality and Analysis Division. Second report of the California Hospital Outcomes Project, volume 2: technical appendix. Sacramento, Calif: California Office of Statewide Health Planning and Development; May 1996. Available at: http://www.oshpd.ca.gov/HQAD/Outcomes/Studies/HeartAttacks/index.htm. Accessed October 21, 2006.
Romano PS, Remy LL, Luft HS. Second report of the California Hospital Outcomes Project, 1996: acute myocardial infarction, volume 2: technical appendix, chapter 014 (March 21, 1996). Center for Health Services Research in Primary Care. Reports prepared for the California Office of Statewide Health Planning and Development. Available at: http://repositories.cdlib.org/chsrpc/coshpd/chapter014. Accessed October 21, 2006.
Krumholz HM, Brindis RG, Brush JE, Cohen DJ, Epstein AJ, Furie K, Howard G, Peterson ED, Rathore SS, Smith SC Jr, Spertus JA, Wang Y, Normand S-LT. Standards for statistical models used for public reporting of health outcomes: an American Heart Association Scientific Statement from the Quality of Care and Outcomes Research Interdisciplinary Writing Group: cosponsored by the Council on Epidemiology and Prevention and the Stroke Council Endorsed by the American College of Cardiology Foundation. Circulation. 2006; 113: 456–462.
Krumholz HM, Wang Y, Mattera JA, Wang Y, Han LF, Ingber MJ, Roman S, Normand S-LT. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with an acute myocardial infarction. Circulation. 2006; 113: 1683–1692.
Efron B, Tibshirani R. An Introduction to the Bootstrap. New York, NY: Chapman and Hall; 1993.
Fleiss J. Statistical Methods for Rates and Proportions. 2nd ed. New York, NY: Wiley; 1981.
Hospital mortality rates for patients with acute myocardial infarction are publicly reported in California and in other states and are a component of the Medicare pay-for-performance project. These comparisons are based on patient information from hospital data collected for statewide or nationwide populations. Fair comparisons require effective adjustments for differences in baseline mortality risk among patients within hospitals. Toward this end, new national standards for uniform hospital administrative data require hospitals to designate which diagnoses are present at admission. This study evaluates how information about present-at-admission diagnoses, which has been collected since 1996 for California hospital patients, affects hospital mortality comparisons. We find that mortality risk adjustment models using present-at-admission diagnoses have much better explanatory and discriminative performance than other models that have been used to adjust for differences in patient-level mortality risk. Models using present-at-admission diagnoses identify fewer hospitals as outliers and obtain meaningfully different results about which hospitals have better or worse than expected mortality. Three statistical methods of identifying hospitals with higher or lower than expected mortality are considered: indirect standardization, adjusted odds ratios, and hierarchical models. These 3 methods also produced substantially different results about which hospitals have better or worse than expected mortality.
The online-only Data Supplement, consisting of an appendix and tables, is available with this article at http://circ.ahajournals.org/cgi/content/full/CIRCULATIONAHA.107.712323/DC1.