Performance of Top-Ranked Heart Care Hospitals on Evidence-Based Process Measures
Background— Despite the increasing availability of evidence-based clinical performance measure data that compare the performance of US hospitals, the general public continues to rely on more popular resources such as the US News & World Report annual publication of “America’s Best Hospitals” for information on hospital quality. This study evaluated how well hospitals ranked on the US News & World Report list of top heart and heart surgery hospitals performed on acute myocardial infarction and heart failure measures derived from American College of Cardiology and American Heart Association clinical treatment guidelines.
Methods and Results— This study identified 774 hospitals, including 41 of the US News & World Report top 50 heart and heart surgery hospitals. To compare hospitals, 10 rate-based performance measures (6 addressing processes of acute myocardial infarction care and 4 addressing heart failure care) were aggregated into a cardiovascular composite measure. As a group, the US News & World Report hospitals performed statistically better than their peers (mean, 86% versus 83%; P<0.05). Individually, however, only 23 of the US News & World Report hospitals achieved statistically better-than-average performance compared with the population average, whereas 9 performed significantly worse (P<0.05). One hundred sixty-seven hospitals in this study routinely implemented evidence-based heart care ≥90% of the time.
Conclusions— A number of the US News & World Report top hospitals fell short in regularly applying evidence-based care for their heart patients. At the same time, many lesser-known hospitals routinely provided cardiovascular care that was consistent with nationally established guidelines.
Received November 10, 2005; revision received April 21, 2006; accepted April 28, 2006.
In July 2004, the Joint Commission on Accreditation of Healthcare Organizations made standardized hospital performance measure data available to the general public through its website (www.qualitycheck.org). The measure data posted and the underlying measures are based on well-established clinical treatment guidelines,1,2 and in their current form, they have been standardized in coordination with the Centers for Medicare and Medicaid Services, adopted by the Hospital Quality Alliance, and endorsed by the National Quality Forum.3 They also have been used to track hospital performance for several years, albeit in slightly disparate forms,4–7 and the data currently are used by multiple stakeholders to demonstrate healthcare quality accountability to the public, purchasers, payers, and others.8,9 Despite the widespread acceptance of clinical measures among healthcare professionals, they have yet to be widely referenced among the general public or incorporated into more popular resources of hospital quality.10 This underuse may be due, at least in part, to a failure to educate consumers about the overall concept of quality in health care and the different elements that make up quality of care.11
Clinical Perspective p 564
Perhaps the most publicly recognizable resource of hospital quality is the annual US News & World Report issue “America’s Best Hospitals.” This report is now in its 16th edition and may be the most visible and widely circulated source of information on hospital performance used by the general public. The newsmagazine has >2 million regular subscribers,12 and special reports are made publicly available through the US News website, which boasts 2 million unique online users per month, with traffic spikes of 2 to 6 times that during key data postings such as the best hospital report.13 Results are further disseminated to the public through local press reports and through the marketing campaigns of hospitals eager to highlight their inclusion on the list.
Previous attempts to evaluate the validity of popular hospital quality resources such as the “America’s Best Hospitals” list and internet-based resources like HealthGrades.com, Inc have produced mixed results. Chen et al14 reported that hospitals ranked high on the US News & World Report list were associated with lower 30-day acute myocardial infarction (AMI) mortality rates (which is not surprising given the use of these mortality data as one of the criteria used in the ranking) and higher rates on process measures addressing aspirin use and β-blocker use. Multivariate analysis suggested that “a portion of the lower mortality associated with top-ranked hospitals was due to their greater use of aspirin and beta blockers.”14 Variation in the outcome and process measures observed across hospitals also revealed a portion of nonranked hospitals that performed as well as or better than “top performers.” The authors suggested that it may be possible for hospitals to achieve the outcomes associated with top performers by increasing the rates of evidence-based practices that had been linked to short-term mortality via randomized clinical trials. Similarly, Krumholz et al15 reported that although hospitals with high HealthGrades ratings did perform slightly better on several measures of quality and outcome when viewed in the aggregate, the ratings did a poor job of distinguishing between any 2 individual hospitals.
Until recently, few alternatives to these popular sources of information were readily available to the general public. Now that clinical performance measure data are becoming more accessible and used in a number of new ways (eg, pay for performance), it is natural to ask how well hospitals identified as “America’s Best Hospitals” perform on measures that address specific evidence-based processes of care. In this study, we examined how well the US News & World Report top-ranked heart and heart surgery hospitals perform on AMI and heart failure process measures derived from American College of Cardiology (ACC) and American Heart Association (AHA) clinical treatment guidelines.
Two performance measure sets, AMI and heart failure, the specifications for which are publicly available through the Joint Commission website,4 address evidence-based processes of care for patients with cardiovascular conditions. Data for the 10 process measures (6 AMI and 4 heart failure measures) are reported to the Joint Commission as individual rates (see Table 1 for a description of each measure). Measure rates represent the number of times a hospital treated a patient in a manner that was consistent with specific evidence-based clinical practice guidelines divided by the number of opportunities presented. For example, a hospital’s observed rate for the AMI measure, aspirin given within 24 hours of arrival, represents the number of times the hospital provided aspirin to an AMI patient within 24 hours of arrival (numerator cases) divided by the number of AMI patients who were eligible to receive aspirin (denominator cases). Patients with contraindications for aspirin are excluded from the denominator population.
To facilitate the analysis, numerator and denominator data from all the measures in Table 1 were aggregated into a single index measure for each hospital, referred to as the cardiovascular composite measure. The denominator for this composite measure represents the number of opportunities that hospitals had to comply with evidence-based guidelines (with appropriate patient exclusions) for cardiovascular care across all 10 measures. The numerator for the index is the number of times the hospital actually complied with the recommended process of care. The cardiovascular composite measure rate is calculated by dividing the numerator (opportunities fulfilled) by the denominator (total opportunities).
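The pooling described above can be sketched in a few lines of code. This is purely illustrative: the measure names and counts below are hypothetical, and the authoritative measure specifications are those in the Joint Commission manual cited in the text.

```python
def composite_rate(measures):
    """Pool numerators and denominators across measures into one rate.

    `measures` maps a measure name to (numerator, denominator), where the
    numerator counts opportunities fulfilled and the denominator counts
    eligible opportunities (after contraindication exclusions).
    """
    fulfilled = sum(num for num, _ in measures.values())
    opportunities = sum(den for _, den in measures.values())
    return fulfilled / opportunities

# Hypothetical hospital: 3 of the 10 measures shown for brevity.
hospital = {
    "AMI: aspirin within 24 hours of arrival": (95, 100),
    "AMI: beta-blocker at arrival": (88, 96),
    "HF: discharge instructions": (70, 104),
}
rate = composite_rate(hospital)  # (95 + 88 + 70) / (100 + 96 + 104)
print(round(rate, 3))  # prints 0.843
```

Note that pooling opportunities this way implicitly weights each measure by its denominator size, which is why the stability of those proportions across hospitals (discussed later) matters for fair comparison.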
Because each measure represents a process of care that is recommended by established clinical practice guidelines (including the opportunity for clinicians to exclude inappropriate patients), no attempt was made to differentiate among the processes based on importance. Consequently, the measures were not weighted in any way, because it is reasonable to expect that all eligible patients should receive care that is consistent with the evidence-based practice guidelines nearly 100% of the time.
To be eligible for inclusion on the 2005 US News & World Report Heart & Heart Surgery top 50 list, hospitals needed to be members of the Council of Teaching Hospitals, to have a medical school affiliation, or to have achieved a minimum score on a hospital-wide Key Technology Index. The technology index comprises key technology elements US News & World Report expects from a “best hospital.” Some examples include cardiac catheterization laboratory, cardiac intensive care beds, magnetic resonance imaging, open heart surgery, and ultrasound. In addition, hospitals needed an annual minimum of 500 surgical discharges and 770 medical discharges (1270 total discharges) and had to offer open heart surgery or a cardiac catheterization laboratory that performs angioplasties. These criteria were met by 853 US hospitals.16 The top 50 hospitals were then selected on the basis of the US News index score. This score is a weighted composite of hospital reputation data based on physician surveys, Medicare Provider Analysis and Review inpatient mortality data, number of discharges, number of on-staff registered nurses divided by the average daily census of patients, designation as a nurse magnet facility, the hospital’s Key Technology Index score, select patient/community services offered, qualifying hospice or palliative care services, and level 1 or 2 trauma services.16
To be included in the present study, hospitals had to meet 3 criteria. First, they had to be Joint Commission accredited; hospitals accredited by the Joint Commission represent >90% of all hospital beds in the United States. Second, hospitals had to submit data to the Joint Commission on all AMI and heart failure performance measures throughout 2004. Third, to ensure credible statistical analyses, hospitals had to have submitted data for a minimum of 30 cases for the year on each of the 10 process measures of the cardiovascular composite measure.17 From these criteria, 774 hospitals were eligible and included in the study, including 41 of the top 50 US News & World Report heart and heart surgery hospitals. The 9 US News & World Report hospitals omitted from the study had not submitted data on all 10 AMI and heart failure measures to the Joint Commission during 2004.
The aggregate cardiovascular composite measure rate of the 41 top US News & World Report heart and heart surgery hospitals was compared with the composite rate for the other 733 Joint Commission hospitals using a 2-tailed t test. Individual hospital rates were then evaluated against the mean cardiovascular composite rate by creating a 99% confidence interval around the cardiovascular composite rate using each hospital’s standard error. Each hospital’s observed rate was compared with its corresponding confidence interval to determine statistical significance (P<0.01).18 The contribution of each measure to the composite measure was calculated using 2 strategies. First, the contribution of each measure was evaluated at the national level to determine the influence of each measure on the national cardiovascular composite. This was accomplished by dividing the number of denominator cases contributed by each measure by the sum of all denominator cases included in the calculation of the national composite rate. This process was then repeated at an individual hospital level to evaluate the contribution of each measure to the individual hospital composite. To assess the dispersion of the 41 US News & World Report hospitals throughout the larger population, all 774 hospitals were evenly grouped into deciles (approximately 77 hospitals per decile) based on their cardiovascular composite measure rates, and the count of US News & World Report hospitals in each decile was presented.
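A minimal sketch of the per-hospital comparison described above is shown below. The exact standard-error formula is an assumption on our part (the binomial standard error of the population rate at the hospital's sample size); the two-sided normal critical value 2.576 yields a 99% confidence interval. All hospital figures here are hypothetical.

```python
import math

Z99 = 2.576  # two-sided 99% normal critical value

def classify(hospital_rate, n, population_rate):
    """Compare a hospital's composite rate with a 99% CI around the
    population rate, built from the hospital's own standard error.

    Returns 'better', 'worse', or 'average'.
    """
    se = math.sqrt(population_rate * (1 - population_rate) / n)
    lower = population_rate - Z99 * se
    upper = population_rate + Z99 * se
    if hospital_rate > upper:
        return "better"
    if hospital_rate < lower:
        return "worse"
    return "average"

# Hypothetical hospitals compared against the 0.83 study-wide rate,
# each with an assumed denominator of 400 opportunities.
print(classify(0.97, 400, 0.83))  # prints "better"
print(classify(0.74, 400, 0.83))  # prints "worse"
print(classify(0.84, 400, 0.83))  # prints "average"
```

Because the interval width shrinks as the hospital's case volume grows, the same observed rate can be "average" for a small hospital but "better" or "worse" for a large one.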
The authors had full access to the data and take full responsibility for their integrity. All authors have read and agree to the manuscript as written.
Comparison of Aggregate Performance
Across all 2004 discharges, the 41 top-rated US News & World Report heart and heart surgery hospitals provided care that was consistent with evidence-based practice guidelines 86% of the time, on average, as measured by the cardiovascular composite index. In comparison, the other 733 hospitals in the study provided care that was consistent with practice guidelines 83% of the time (see Table 2 for a summary of differences between the groups). The difference between these rates is statistically significant (P<0.05). Hospitals ranked by US News & World Report also performed slightly better, in the aggregate, on 5 of the 10 individual measures making up the composite. Differences between the groups ranged from 1% to 5% and were considered statistically significant (P<0.05) for 5 measures (Table 3).
Contributions of the individual performance measures to the cardiovascular composite index are presented in Table 3. Although there was clearly some variation in the contribution of each measure to the cardiovascular composite (for example, the aspirin at arrival measure contributes to 9.7% of the cardiovascular composite rate, the heart failure discharge instructions measure contributes to 16.7%, and the AMI smoking cessation counseling measure contributes to 4.7%), an analysis of individual hospital composite rates suggests that these individual measure proportions were quite stable across hospitals. In other words, the measures that make up the composite appear to be weighted consistently across hospitals, presumably because the measure exclusion criteria have a similar effect on the proportions of AMI patients who are eligible for individual measure populations (ie, one would not expect to find a higher proportion of patients with contraindications to aspirin at one hospital versus another).
Comparison of Individual Performance
Twenty-eight US News & World Report hospitals scored above the study sample mean of 0.83 (23 by a statistically significant amount, P<0.01) and 13 scored below the mean (9 by a statistically significant amount, P<0.01). The cardiovascular composite index rates for the top US News & World Report organizations ranged from a high of 0.97 to a low of 0.74. For the study sample as a whole, the rates ranged from 0.99 to 0.48. Of the 774 hospitals in the study, 304 had a composite index score ≥0.86, the average rate for the 41 top US News & World Report hospitals.
For comparative purposes, all 774 hospitals were ranked on the basis of their cardiovascular composite index rate. Eight of the 41 US News & World Report hospitals were ranked among the top 100 hospitals in the study, and 4 were ranked in the top 50. The average rank of the 41 top US News & World Report hospitals among the 774 in the study was 314, ranging from a high of 14 to a low of 665 (Table 4 displays the dispersion of US News & World Report hospitals across the study population).
The measurement of healthcare quality has been approached from many directions. The US News & World Report approach is based on the Avedis Donabedian model of structure, process, and outcome.19 Because they did not have access to good data on healthcare processes, however, US News & World Report researchers substituted the “nomination” of hospitals by physicians as a proxy measure. Each year, US News & World Report surveys a sample of board-certified physicians in the specialty areas of interest to solicit nominations. For the 2005 rankings, they pooled the nominations from 2003, 2004, and 2005.16
In contrast, the measures of clinical process used in the present study are based on level I recommendations found in the ACC/AHA clinical guidelines.20,21 For example, it is well established that the early use of aspirin in AMI patients results in a significant reduction in adverse events and subsequent mortality. National guidelines strongly recommend early administration of aspirin for patients hospitalized with AMI.1 Similarly, according to the ACC/AHA practice guidelines, some of the most effective and underused practices in the treatment of heart failure are related to patient education. Addressing this gap, the heart failure discharge instructions measure represents the proportion of hospital patients who left the hospital with written instructions addressing their diet, medications, activity level, follow-up, weight, and symptoms. The ACC/AHA guidelines state that patient “nonadherence with diet and medications can rapidly and profoundly affect the clinical status of patients, and increases in body weight and minor changes in symptoms commonly precede by several days the occurrence of major clinical episodes that require emergency care or hospitalization. Patient education and close supervision, which includes surveillance by the patient and his or her family, can reduce the likelihood of nonadherence and lead to the detection of changes in body weight or clinical status early enough to allow the patient or a healthcare provider an opportunity to institute treatments that can prevent clinical deterioration.”2 Each of the measures included in the composite has been precisely defined, standardized, rigorously tested, and implemented on a national scale.22,23 When viewed together, through the window of the national comparative performance measurement database of the Joint Commission, the measures allow valid comparisons of the provision of healthcare processes across organizations.5
Aggregating these process measures into a cardiovascular composite measure or index combines many aspects of care into a single score or rating. The premise underlying the interpretation of composite measures is the notion that a hospital has a given number of opportunities to apply care that is consistent with evidence-based clinical practice guidelines. The cardiovascular composite index reports the number of times, or the percent of time, that the organization fulfilled that opportunity. As such, we believe that the composite represents one important dimension of hospital quality, but it is clearly not the only relevant dimension of hospital quality. Such composite measures have an added advantage in that they are more readily understood by consumers and other stakeholders.24
Although the use of a composite measure can raise concerns with respect to the possible disproportionate influence of individual measures on the composite rate, analysis of individual hospital rates confirmed that the proportion of individual measure contributions to the composite was at least consistent across hospitals. Although this consistency across hospitals does not address the debate over which evidence-based measures are most important, it does suggest that the comparisons between hospitals are fair (each has the same opportunity to achieve a high or low rate). From a practical standpoint, it also is important to note that a hospital’s performance on an individual measure is theoretically independent of its performance on the other measures. In other words, there is no incentive to perform 1 evidence-based practice to the exclusion of another. For example, providing aspirin at arrival to eligible AMI patients would not be expected to adversely affect the hospital’s opportunity to provide aspirin at discharge or the provision of written discharge instructions. For this reason, any attempt to differentially weight the evidence-based measures within the cardiovascular composite was deemed unnecessary and inappropriate.
In the aggregate, the hospitals selected by US News & World Report as providing the best heart care did perform statistically significantly better on the cardiovascular composite index and on half of the individual performance measures (5 of 10 measures making up the composite) compared with the other 733 hospitals included in this study. The question of whether this aggregate superiority translates to higher-quality care at any given hospital in the US News & World Report cohort on any particular day is unclear at best. Individually, 13 hospitals not in the US News & World Report cohort did better in providing these evidence-based aspects of heart care than did any of the 41 top US News & World Report organizations, and 313 nonranked hospitals did better than half of those ranked on the US News & World Report list. Of the 41 top-ranked US News & World Report hospitals in the study, the best-performing organization took advantage of its opportunities to comply with evidence-based clinical practice guidelines 97% of the time; the lowest-performing US News & World Report hospital took advantage of those opportunities just 74% of the time. These modest aggregate differences are similar to those observed in previous studies, as was the significant variation observed in performance across hospitals.14,15
A number of limitations to the study exist. First, several measures included in the cardiovascular composite are approaching maximum performance limits (100%) and reveal little variation between hospitals. Inclusion of these measures may result in a more conservative analysis (in which differences among hospitals are masked by measures that address processes on which all hospitals seem to perform relatively well). Additional research is needed to establish the point at which evidence-based processes have become so consistently implemented that they can no longer differentiate among hospitals on the dimension of quality.
A second limitation is related to the selection criteria used in the study. We made no attempt to replicate the selection criteria used by US News & World Report, which previously identified 853 hospitals eligible for their ranking. All 774 hospitals in the study were Joint Commission–accredited hospitals; all submitted both AMI and heart failure data to the Joint Commission during 2004; and all were large enough to have a minimum of 30 cases per year for each of the 10 measures included in the composite. Therefore, the study includes the universe of hospitals for which a complete set of cardiovascular performance measure data is available to the public through the Joint Commission. As it happens, 41 of these hospitals also are included on the US News & World Report list of America’s best hospitals. Thus, the 733 hospitals included in the comparison group do not necessarily reflect the original sample of hospitals eligible for selection in the US News & World Report ranking (although significant overlap is likely). Some proportion of the 733 comparison group hospitals may not have been considered by US News & World Report, and if they had been considered, perhaps they would have replaced some of the 41 that made it into the top 50 list. Despite these differences in sampling criteria, we think it is important for healthcare professionals (and the general public) to know that on measures directly assessing compliance with processes of care that are recommended by the ACC/AHA guidelines, many hospitals do well. On the other hand, other hospitals (even some among those identified as America’s best) still have considerable room for improvement.
In conclusion, the fact that so many hospitals did as well as or better than those ranked at the top by US News & World Report is good news for many Americans who may not have access to those highly rated institutions. On the basis of the measures used here, there are many other hospitals across the country where patients can receive high-quality heart care. One hundred sixty-seven hospitals in this study implemented processes known to be necessary to good heart care ≥90% of the time. Although room for improvement in American health care remains, even among those institutions having the best reputations, these data show that good heart care, based on ACC/AHA clinical guidelines, is being provided routinely by a much wider range of hospitals than is generally recognized.
Few would argue that the criteria used by US News & World Report to rank hospitals are unrelated to healthcare quality or that the hospitals identified within their top 50 are not excellent institutions. Nevertheless, a number of them fall short in routinely applying evidence-based care for their heart patients. At the same time, many lesser-known organizations routinely provide care that is consistent with measures of nationally established guidelines. Such measures provide a previously unavailable dimension of quality that can now offer a significant contribution toward helping consumers make informed decisions about selecting a hospital. To know how well an individual hospital complies with ACC/AHA evidence-based processes of care, it is necessary to review the data for that individual hospital in comparison to its peers. Fortunately, it is now possible for healthcare professionals and the general public to do just that.
The authors thank Stephen Schmaltz, PhD, for his assistance in reviewing this article.
All authors are employees of the Joint Commission on Accreditation of Healthcare Organizations. Dr Williams has received research grants 047139 (RWJF) and 4007SC (UCSF subcontract) for a study to evaluate smoking cessation counseling. Dr Loeb has received research grant 1-U18-HSO13728-01 AHRQ to study performance measurement in hospitals.
Antman EM, Anbe DT, Armstrong PW, Bates ER, Green LA, Hand M, Hochman JS, Krumholz HM, Kushner FG, Lamas GA, Mullany CJ, Ornato JP, Pearle DL, Sloan MA, Smith SC Jr. ACC/AHA guidelines for the management of patients with ST-elevation myocardial infarction: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Committee to Revise the 1999 Guidelines for the Management of Patients with Acute Myocardial Infarction), 2004. Available at: http://www.acc.org/qualityandscience/clinical/guidelines/stemi/Guideline1/index.htm. Accessed May 1, 2006.
Hunt SA, Abraham WT, Chin MH, Feldman AM, Francis GS, Ganiats TG, Jessup M, Konstam MA, Mancini DM, Michl K, Oates JA, Rahko PS, Silver MA, Stevenson LW, Yancy CW. ACC/AHA 2005 guideline update for the diagnosis and management of chronic heart failure in the adult: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Writing Committee to Update the 2001 Guidelines for the Evaluation and Management of Heart Failure). Available at: http://www.acc.org/clinical/guidelines/failure/index.pdf. Accessed May 1, 2006.
Joint Commission on Accreditation of Healthcare Organizations. Specification Manual for National Hospital Quality Measures 2005. Available at: http://www.jointcommission.org/PerformanceMeasurement/PerformanceMeasurement/Current+NHQM+Manual.htm. Accessed May 1, 2006.
CMS Office of Public Affairs. Medicare “pay for performance (P4P)” initiatives [fact sheet]. January 31, 2005. Available at: http://www.cms.hhs.gov/apps/media/press/release.asp?Counter=1343. Accessed May 1, 2006.
Kuhn H. Statement on pay for performance initiatives. Testimony before: Subcommittee on Health of the Committee on Ways and Means; March 15, 2005. Available at: http://www.hhs.gov/asl/testify/t050315a.html. Accessed May 1, 2006.
Bacon’s media source. Available at: http://www.bacons.com. Accessed August 11, 2005.
U.S. News & World Report: USNews.com Media Center. Available at: http://www.usnews.com/usnews/media/. Accessed May 1, 2006.
Chen J, Radford MJ, Wang Y, Marciniak TA, Krumholz HM. Do “America’s best hospitals” perform better for acute myocardial infarction? N Engl J Med. 1999; 340: 286–292.
McFarlane E, Olmsted M, Murphy J, Hill C. RTI International: America’s best hospitals 2005 methodology. Available at: http://www.usnews.com/usnews/health/best-hospitals/methodology/ABH_Methodology_2005.pdf. Accessed May 1, 2006.
Neter J, Wasserman W, Whitmore GA. Applied Statistics. 2nd ed. Boston, Mass: Allyn and Bacon, Inc; 1982.
Casella G, Berger RL. Statistical Inference. Belmont, Calif: Duxbury Press; 1990.
Donabedian A. Evaluating the quality of medical care. Milbank Memorial Fund Q. 1966; 44: 166–203.
Krumholz HM, Anderson JL, Brooks NH, Fesmire FM, Lambrew CT, Landrum MB, Weaver WD, Whyte J. ACC/AHA clinical performance measures for adults with ST-elevation and non–ST-elevation myocardial infarction: a report of the ACC/AHA Task Force on Performance Measures (ST-Elevation and Non–ST-Elevation Myocardial Infarction Performance Measures Writing Committee). J Am Coll Cardiol. 2006; 47: 236–265.
Bonow RO, Bennett S, Casey DE Jr, Ganiats TG, Hlatky MA, Konstam MA, Lambrew CT, Normand S-LT, Piña IL, Radford MJ, Smith AL, Stevenson LW. ACC/AHA clinical performance measures for adults with chronic heart failure: a report of the ACC/AHA Task Force on Performance Measures (Writing Committee to Develop Heart Failure Clinical Performance Measures). J Am Coll Cardiol. 2005; 46: 1144–1178.
Joint Commission on Accreditation of Healthcare Organizations. A comprehensive review of development and testing for national implementation of hospital core measures. Available at: http://www.jointcommission.org/NR/rdonlyres/48DFC95A-9C05-4A44-AB05-1769D5253014/0/AComprehensiveReviewofDevelopmentforCoreMeasures.pdf. Accessed May 1, 2006.
Williams SC, Watt A, Schmaltz SP, Koss RG, Loeb JM. Assessing the reliability of standardized performance measures. Int J Qual Health Care. 2006; 18: 246–255.
Courtney J, Krumholz H, Wang Y, Turnbull B. Using composite performance measures for the public reporting of hospital performance data. Paper presented at: International Society for Quality in Health Care, Paris Indicators Summit; Paris, France; November 2002. Available at: http://www.isqua.org/isquaPages/Conferences/paris/ParisAbstractsSlides/IndicatorProgram/MondayPoster/MondayPoster.pdf/Ind027%20-%20Courtney.pdf. Accessed May 1, 2006.
This article presents clinicians with a comparative analysis of data from 774 hospitals accredited by the Joint Commission on Accreditation of Healthcare Organizations, including 41 hospitals identified by US News & World Report as “America’s Best Heart and Heart Surgery Hospitals.” The measures used in this comparison specifically address many of the evidence-based processes of care recommended by the American College of Cardiology and the American Heart Association heart failure and acute myocardial infarction treatment guidelines. Although these treatment guidelines have been available for many years, only recently have clinicians had access to data from reliable, standardized performance measures that can be used to compare the frequency with which hospitals provide these evidence-based processes of care to their patients. Results of the analysis revealed considerable variation in performance across hospitals, even among hospitals with the best reputations, as depicted by US News & World Report. Consequently, although a variety of metrics can be used to evaluate healthcare quality, if one wants to know how well an individual hospital complies with American College of Cardiology/American Heart Association evidence-based processes of care, it is critical to review data for each hospital and compare it with that of its peers. Fortunately, clinicians (and the general public) can now freely access these data online (www.qualitycheck.org, www.hospitalcompare.hhs.gov), making it possible to use the information during the patient referral decision-making process.