The Need for Multiple Measures of Hospital Quality
Results From the Get With The Guidelines–Heart Failure Registry of the American Heart Association

Abstract
Background—Process and outcome measures are often used to quantify quality of care in hospitals. Whether these quality measures correlate with one another and the degree to which hospital provider rankings shift on the basis of the performance metric is uncertain.
Methods and Results—Heart failure patients ≥65 years of age hospitalized in the Get With The Guidelines–Heart Failure registry of the American Heart Association were linked to Medicare claims from 2005 to 2006. Hospitals were ranked by (1) composite adherence scores for 5 heart failure process measures, (2) composite adherence scores for emerging quality measures, (3) risk-adjusted 30-day death after admission, and (4) risk-adjusted 30-day readmission after discharge. Hierarchical models using shrinkage estimates were performed to adjust for case mix and hospital volume. There were 19 483 patients hospitalized from 2005 to 2006 from 153 hospitals. The overall median composite adherence rate for heart failure process measures was 85.8% (25th, 75th percentiles 77.5, 91.4). Median risk-adjusted 30-day mortality was 9.0% (7.9, 10.4). Median risk-adjusted 30-day readmission was 22.9% (22.1, 23.5). The weighted κ for remaining within the top 20th percentile or bottom 20th percentile was ≤0.15, and the overall Spearman correlation between the different measures of quality of care was ≤0.21. The average shift in ranks was 33 positions (13, 68) when the criterion was changed from 30-day mortality to readmission and 51 positions (22, 76) when the ranking metric was changed from 30-day mortality to composite process adherence.
Conclusions—Agreement between different methods of ranking hospital-based quality of care and 30-day mortality or readmission rankings was poor. Profiling quality of care will require multidimensional ranking methods and/or additional measures.
Introduction
Attention to the quality and value of health care has increased significantly in the United States with the passage of the Affordable Care Act.1 Core principles for reform include improving quality of care and reducing the costs of care. As part of this effort, multiple agencies have promoted the idea that quality of care be both measured and transparently disclosed to promote quality improvement and informed consumer decision making.2 The challenge, however, is to define the metrics for provider performance profiling.3–5 To date, these measures have included evidence-based care processes as well as various outcome metrics.
Clinical Perspective on p 719
Heart failure is one of the main areas targeted for improvement in quality and costs. Prior studies have documented gaps in the use of evidence-based therapies as well as nonideal outcomes including high early mortality and need for readmission.6–11 Moreover, these process and outcome metrics vary considerably among providers.11 Yet there is limited information on how these various aspects of quality correlate with one another, which presents challenges to the public, hospitals, and policy makers designing pay-for-performance policies. Therefore, we sought to evaluate the correlation and potential impact on hospital rankings of different measures of hospital quality on the basis of discharge processes and 30-day outcomes for patients hospitalized with heart failure using the Get With The Guidelines–Heart Failure (GWTG-HF) registry.
Methods
Data Sources
We merged data from the GWTG-HF registry with enrollment files and in-patient claims from the Centers for Medicare and Medicaid Services (CMS) from January 1, 2005, through December 31, 2006, and patients had follow-up through the end of 2006. The design, inclusion criteria, and data collection methods have been previously published.9,12 Briefly, patients were eligible for inclusion in the registry if they were admitted for an episode of worsening heart failure or developed significant heart failure symptoms during a hospitalization for which heart failure was the primary discharge diagnosis. Hospital teams used heart failure case-ascertainment methods similar to those used by the Joint Commission.13 Data on medical history, signs and symptoms, medications, contraindications for or intolerance to medications, and diagnostic test results were collected via a Web-based registry. All regions of the United States were represented, and a variety of centers, from community hospitals to large tertiary centers, participated.
All participating institutions were required to comply with local regulatory and privacy guidelines and, if applicable, to obtain institutional review board approval. Because the data were used primarily at the local site for quality improvement, sites were granted a waiver of informed consent under the Common Rule. Outcome Sciences Inc (Cambridge, MA) served as the registry coordinating center. The Duke Clinical Research Institute (Durham, NC) served as the data analysis center.
The CMS files included data for all fee-for-service Medicare beneficiaries ≥65 years of age who were hospitalized with a diagnosis of heart failure (International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM] 428.x, 402.x1, 404.x1, and 404.x3). We merged patient data in the registries with Medicare Part A in-patient claims, matching by admission and discharge dates, hospital, date of birth, and sex. Of the 40 659 hospitalizations of patients ≥65 years of age, we matched 30 484 (75.2%) to fee-for-service Medicare claims from 215 hospitals representing 25 277 patients.
Process and Outcome Measures
The GWTG-HF program uses the same criteria as CMS and the Joint Commission to determine treatment eligibility for core performance measures and American College of Cardiology/American Heart Association heart failure hospital performance measures. The detailed specifications for each measure have been published previously, and the measure construct accounts for eligibility of patients for a process measure.14 Achievement measures include discharge instructions, documentation of left ventricular (LV) function, angiotensin-converting enzyme inhibitor or angiotensin receptor blocker use in patients with LV systolic dysfunction (SD), smoking cessation counseling, and β-blocker use in patients with LVSD. Emerging measures of quality include process measures based on Class I recommendations from the 2005 American College of Cardiology/American Heart Association guidelines and include anticoagulation for atrial fibrillation, aldosterone antagonist use for patients with LVSD, evidence-based β-blocker use in patients with LVSD, blacks with LVSD discharged on hydralazine/isosorbide combination, discharge blood pressure <140/90 mm Hg, and implantable cardioverter-defibrillator planned or implanted in patients with an ejection fraction ≤30%.15 Outcome measures include risk-adjusted 30-day mortality from admission and risk-adjusted 30-day readmission from discharge.
The opportunity-based composite score was calculated as the sum of the total instances that a required measure was performed (ie, correct care given) divided by the total number of eligible opportunities (based on the number of measures) across all patients at a given hospital. This method is identical to that used by CMS and was the basis for comparing the correlation of achievement and emerging measures as well as outcome measures.
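As a concrete illustration, the opportunity-based composite described above can be sketched in a few lines of Python. The data layout and field names below are hypothetical, not the actual registry schema:

```python
# Hypothetical sketch of the opportunity-based composite score: each row is
# one (patient, measure) opportunity; `met` is 1 if the required care was
# delivered to an eligible patient. Field names are illustrative only.
def composite_score(opportunities):
    """Sum of fulfilled opportunities divided by total eligible opportunities."""
    met = sum(1 for o in opportunities if o["met"])
    return met / len(opportunities)

# Four opportunities at one hospital, three of which were fulfilled.
hospital_a = [
    {"patient": 1, "measure": "discharge_instructions", "met": 1},
    {"patient": 1, "measure": "lv_function_documented", "met": 1},
    {"patient": 2, "measure": "discharge_instructions", "met": 0},
    {"patient": 2, "measure": "beta_blocker_lvsd", "met": 1},
]
print(composite_score(hospital_a))  # 3 of 4 opportunities -> 0.75
```

Note that the denominator counts opportunities, not patients, so a patient eligible for many measures contributes more weight than a patient eligible for one.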
Analysis Population
The study population included hospitals fully participating in the GWTG-HF registry from January 2005 to December 2006. Hospitals with <15 cases were excluded (193 patients and 23 hospitals). Eligible patient populations for achievement or emerging measures were defined by previously described eligibility criteria. For all process measures, patients with documented contraindications or other medical exceptions for that therapy, as well as those documented as comfort care only, were excluded. Discharge from the hospital to palliative or hospice care also excluded a patient from each discharge process measure (n=400; 2.05%). For outcome measures, the framework followed the outcomes currently reported by CMS, which do not exclude patients discharged to hospice care; transfer-outs were also excluded.
Statistical Analysis
We summarized baseline characteristics and hospital characteristics using percentages for categorical variables and median and 25th, 75th percentiles for continuous variables. In addition, hospitals are classified into 3 groups based on risk-adjusted 30-day mortality rates, top 20%, middle 60%, and bottom 20%, which is consistent with pay-for-performance programs.16 Patient characteristics are compared across the 3 groups using the Pearson χ2 test for categorical variables and the Kruskal-Wallis test for continuous variables. Hospital performance on individual achievement measures, emerging measures, the composites, and risk-adjusted outcomes are summarized on hospital level, reported using median and 25th, 75th percentiles and compared across the 3 groups of hospitals using the Kruskal-Wallis test.
A hierarchical multivariable logistic regression model was performed to adjust the outcomes and composite scores for patient case mix and calculate risk-adjusted outcomes. In the analyses, hospitals were treated as random effects to model hospital effects and calculate hospital-specific outcomes. For clinical outcomes, the patient-based data were used. For the composite scores to achievement and emerging measures, the analysis used opportunity-based data. Each opportunity (each measure for which a patient was eligible) contributed an observation, and the outcome was a dichotomous variable with value 1 (positive) or 0 (negative), indicating whether the opportunity was fulfilled. For example, if a patient was eligible for 5 measures and received 4, the patient would have 5 observations in the analysis data set, of which 4 would be positive events. In this opportunity-based analysis, a measure indicator was included to adjust the composite scores for treatment opportunity mix because adherence rates to individual measures vary considerably and the mix of the treatment opportunity faced by a hospital could influence hospital ranking.
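The opportunity-level expansion described in this paragraph might look like the following sketch. The field names and data structure are assumptions for illustration; the actual modeling was done in SAS with a hierarchical logistic regression treating hospital as a random effect:

```python
# Illustrative expansion of patient-level eligibility data into the
# opportunity-based analysis data set: one observation per eligible measure,
# with a 1/0 outcome for whether the opportunity was fulfilled.
def expand_to_opportunities(patients):
    rows = []
    for p in patients:
        for measure, fulfilled in p["eligible"].items():
            rows.append({
                "hospital": p["hospital"],   # modeled as a random effect
                "measure": measure,          # indicator adjusts for opportunity mix
                "fulfilled": int(fulfilled),
            })
    return rows

# The paper's example: a patient eligible for 5 measures who received 4
# contributes 5 observations, 4 of them positive.
patients = [
    {"hospital": "H1", "eligible": {"m1": True, "m2": True, "m3": False,
                                    "m4": True, "m5": True}},
]
rows = expand_to_opportunities(patients)
print(len(rows), sum(r["fulfilled"] for r in rows))  # 5 4
```

Including the measure indicator in the model is what adjusts composite scores for each hospital's mix of treatment opportunities.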
A standard list of GWTG-HF clinical variables was employed in all models, including demographics (age; female sex; race; black, Hispanic, or other versus white; insurance), past medical history (atrial fibrillation, chronic obstructive pulmonary disease, diabetes mellitus, hypertension, hyperlipidemia, peripheral vascular disease, ischemic heart disease, previous stroke or transient ischemic attack, anemia, long-term dialysis, depression, smoking status, nonischemic valvular history, history of prior admission for heart failure), vital signs (systolic blood pressure, heart rate, respiratory rate), admission labs (serum sodium, blood urea nitrogen, creatinine, ratio of blood urea nitrogen to serum creatinine, hemoglobin, troponin above upper limit of normal, brain natriuretic peptide), heart failure characteristics at presentation (dyspnea, acute pulmonary edema, volume overload/weight gain, dizziness/syncope, worsening fatigue, pulmonary congestion), and other contributing conditions. Overall, missing data were ≤8% with the exception of hemoglobin (13%), brain natriuretic peptide (28%), and troponin normal/abnormal status (26%). The following missing variables were imputed: sex as male; race as white; past medical history and contributing factors as no with the exception of history of ischemic heart disease; vital signs and labs were imputed as the median value; troponin status was imputed as normal; and weight was imputed to sex-specific median weight values. For the clinical outcomes, the fitted prediction model has comparable discrimination with publicly reported models (C-index for 30-day-mortality model=0.75 and 30-day readmission model=0.65).11 The methods used are also similar to standards for statistical models for public reporting.6
Hospital risk-adjusted outcomes or composite scores were calculated using observed rate divided by expected rate given the risk factors of the patients and then multiplied by the national average rate. The shrinkage estimation method was used to account for hospital case volume. Hospitals were first ranked on the basis of the risk-adjusted outcomes or composite scores of achievement or emerging measures and then divided into the 3 financial-incentive categories (top 20% performers, middle 60% performers, and bottom 20% performers) on the basis of prior studies.16 The hospital ranking and the corresponding incentive groups were compared across the 5 criteria. Spearman rank correlation and weighted κ were reported to describe the degree of agreement between any 2 of the 5 criteria.
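The indirect-standardization calculation described above is straightforward arithmetic; the sketch below illustrates it with made-up numbers (the expected count would come from the fitted hierarchical model, not from this function):

```python
# Risk-adjusted rate = (observed / expected) * national average rate,
# as described in the text. Inputs here are illustrative, not study data.
def risk_adjusted_rate(observed_events, expected_events, national_rate):
    return (observed_events / expected_events) * national_rate

# A hospital with 12 deaths where the model expected 10, against a 9%
# national 30-day mortality rate, gets a risk-adjusted rate of 10.8%.
print(round(risk_adjusted_rate(12, 10.0, 0.09), 3))  # 0.108
```

A hospital with fewer observed events than expected would, symmetrically, be credited with a rate below the national average.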
All P values were 2-tailed with statistical significance set at 0.05. All statistical analyses were performed using SAS version 9.2 (SAS Institute Inc, Cary, NC).
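For intuition, the Spearman rank correlation used to compare hospital rankings can be computed directly from the two sets of ranks. The following minimal sketch ignores ties for brevity (standard implementations, including the SAS procedures used here, assign midranks), and its inputs are illustrative:

```python
# Minimal Spearman rank correlation: Pearson correlation of the ranks.
# Ties are not handled; inputs are hypothetical hospital scores.
def spearman(x, y):
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Perfectly concordant and perfectly discordant hospital rankings:
print(round(spearman([1, 2, 3, 4], [10, 20, 30, 40]), 6))  # 1.0
print(round(spearman([1, 2, 3, 4], [40, 30, 20, 10]), 6))  # -1.0
```

Values near 0, as observed in this study, mean that a hospital's rank under one quality criterion carries almost no information about its rank under another.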
Results
The 2005 to 2006 GWTG-HF linked to CMS claims population included 19 952 patients from 176 hospitals. After excluding non–heart failure admissions (n=276) and hospitals with <15 cases, the final study population was 19 483 patients in 153 hospitals.
Table 1 shows the patient characteristics overall and by top 20%, middle 60%, and bottom 20% of hospitals on the basis of 30-day mortality. Although there are some statistical differences between hospital groups, there are no large or consistent differences in clinical characteristics. The median age of the study population was 80 years with 45% male and 82% white. Comorbid conditions were common with >60% with ischemic heart disease, 39% with diabetes mellitus, 36% with atrial fibrillation, and 19% with chronic kidney disease. Other clinical characteristics included a median systolic blood pressure of 138 mm Hg, median sodium level of 138 mEq/L, serum creatinine of 1.3 mg/dL, and a median ejection fraction of 40%.
Patient Characteristics, Overall and Stratified by Hospital Deciles of Performance Based Upon Risk-Adjusted 30-Day Mortality After Admission
Hospitals in the study varied in size, region, and structure. The median bed size was 233 ([25th, 75th percentile] 107, 375). The regional distribution of hospitals was 16.3% Northeast, 35.3% Midwest, 32.0% South, and 11.8% West. Percutaneous coronary intervention facilities were available in 64.7% and cardiac surgery was available in 47.1% of the hospitals.
Table 2 shows the performance by hospitals for GWTG-HF achievement, emerging, and outcome measures at the top 20%, middle 60%, and bottom 20% of hospitals on the basis of 30-day mortality, given that pay-for-performance policies generally reward the top 2 deciles and penalize the bottom 2 deciles.16 In general, achievement measures were met by the majority of hospitals with the median performance >75% for all achievement measures and the median opportunity-based composite 85%. Emerging measures had adherence rates that were much lower with much wider variation in adherence to the individual measures across hospitals, with the median hospital performance ranging from 0% for hydralazine/isosorbide in eligible patients to 75% for discharge blood pressure <140/90 mm Hg. Outcomes across hospitals also varied in a manner similar to what has been reported by CMS.11 The median risk-adjusted 30-day readmission rate was 22.9%, and the median risk-adjusted 30-day mortality rate was 9.0%.
Hospital Performance Based Upon Risk-Adjusted 30-Day Mortality After Admission
Table 3 shows the agreement between achievement, emerging, and outcome measures on the basis of incentive categories as well as the median shift in ranks. Across all measures, the weighted κ among incentive category status (top and bottom deciles) was ≤0.15, indicating poor to no agreement. The median shift in rankings ranged from 33 to 51 positions, with wide interquartile ranges across all composite score or outcome relationships.
Agreement Between Ranking by Outcomes and Performance Measures
Table 4 shows the correlation of individual process measures with risk-adjusted 30-day outcome measures. In general, there is little correlation of any of the process measures with 30-day outcome measures with the exception of evidence-based β-blocker and 30-day readmission. Likewise, there is modest correlation with measurement of LV function and 30-day readmission.
Correlation With Process Measures and Risk-Adjusted Outcome Measures
In the Figure, panel A plots the correlation of risk-adjusted composite scores for achievement versus emerging measures, demonstrating modest correlation. Panel B shows a minimally significant positive correlation of 30-day risk-adjusted mortality versus 30-day risk-adjusted readmission.
A, Correlation with achievement and emerging measures (Spearman correlation =0.21; P=0.008). B, Correlation of 30-day readmission with 30-day mortality (Spearman correlation =0.17; P=0.03).
There is also no relationship between 30-day readmission and achievement measures (Spearman correlation=−0.11; P=0.17) or emerging measures (Spearman correlation=0.086; P=0.29). Similarly, there is no relationship between 30-day mortality and achievement measures (Spearman correlation=−0.04; P=0.64) or emerging measures (Spearman correlation=−0.03; P=0.68).
Discussion
There has been great attention to defining the quality and value of health care through reporting of performance and outcome measures. In this analysis, we demonstrate that different measures may rank hospitals in substantially different ways depending on whether a set of process measures or outcome measures are used. Therefore, focusing on a single area of performance or single measure is unlikely to yield gains in other areas.
Our study has important implications for health policy. As public reporting and pay-for-performance policies grow for performance and outcome measures, it is important to recognize the heterogeneity in performance and the lack of correlation among different measures of quality. To truly consistently reward the highest-quality hospitals, a multidimensional ranking method will likely be needed. It is also important to carefully consider that profiling hospitals on the basis of a single measure may misalign incentives, as a hospital that performs well for 30-day readmission may not perform well in an equally important area such as 30-day mortality. Therefore, only focusing on reducing readmission through programs that provide penalties for centers with high risk-adjusted readmissions will not clearly raise overall quality, as has been suggested.1 Some hospitals, especially those serving vulnerable patient populations may be unfairly penalized. Even with multiple measures, there are potential downstream consequences of penalizing hospitals that need the most support, leaving unintended effects on quality.17,18 In addition, hospitals may simply shift their resources to focus on the most easily obtainable or financially important area with a reduction in resources elsewhere.19
Our analysis, showing minimal correlation between 2 different sets of measures (achievement versus emerging), suggests that adding new measures of quality may help to discriminate among hospitals providing care consistent with professional guidelines. In addition, there is little correlation of individual process measures with outcome measures. Although most attention in measuring quality of care has focused on process measures, in many areas there are limited measures currently in use. For example, only 4 processes have become publicly reported measures for heart failure despite professional guidelines indicating several opportunities for establishing performance measures based on class I guideline-recommended therapies.20 Although performance has varied across hospitals for CMS core measures, it is becoming more challenging to discriminate hospitals based on the current process measures, and, in some areas, measures have been retired because of such high performance.5,21 Furthermore, the additional measures included in our study were developed using the same criteria as existing performance measures, with evaluation of their interpretability, applicability, and feasibility, and could be readily integrated into the evaluation of quality of care.22,23
Because process measures reflect only a limited portion of care, other measures focused on outcomes have been developed. Outcome measures of 30-day mortality and 30-day readmission have highlighted significant variation across hospitals in the United States that remains despite increased national attention.11,24 Whereas these measures may integrate all aspects of care, some have raised concern that hospitals may be poorly profiled with higher readmission rates because of lower mortality rates.25 Because our analysis found only a very modest association between hospital performance on 30-day mortality and on 30-day rehospitalization, policies should consider rewarding hospitals on the basis of both 30-day risk-adjusted mortality and 30-day risk-adjusted readmission, or the combination of death and readmission.
Previous studies have suggested that measuring quality is complex and there is not always a direct relationship between processes and outcomes. In a study of the Organized Program to Initiate Lifesaving Treatment in Hospitalized Patients With Heart Failure (OPTIMIZE-HF) linked to Medicare claims, hospital process performance of CMS/Joint Commission core measures was not associated with patient outcomes within 1 year of discharge.3 In another study examining emerging measures of performance, several evidence-based processes of care such as aldosterone antagonists, implantable cardioverter-defibrillator therapy, and evidence-based β-blocker use were modestly associated with improved outcomes and may discriminate hospital-level quality of care.4 Finally, a study examining discharge instructions and readmission rates also showed no relationship.26 In combination with these studies, the present analyses support the use of additional measures of quality as well as integrating outcome measures to provide a multidimensional assessment of hospital quality. As process measures emerge, rigorous evaluations should occur to understand the associations with readmission, mortality, or other clinical outcomes that can reflect differences in care quality. Others have suggested consideration of domains of quality that integrate processes and outcomes, including a method for deriving a composite.27 Finally, the American College of Cardiology and the American Heart Association have published a position statement on composite measures that provides a framework for selection, aggregation, and weighting of measures.28
Limitations
There are several limitations in our study that should be noted. The patient population studied is Medicare fee-for-service beneficiaries enrolled in GWTG-HF and may not be representative of all patients hospitalized with heart failure. However, the outcomes and variation are likely conservative given that hospitals volunteer to participate in a quality-improvement registry. Recent data suggest that patients in the predecessor of GWTG-HF, OPTIMIZE-HF, were reasonably representative of CMS patients. The models used in this study factored in age, race/ethnicity, multiple comorbidities, and hospital characteristics; however, we were not able to adjust for socioeconomic factors or adherence to follow-up. Thus, there may also be other measured or unmeasured confounding variables that, had they been adjusted for, would have altered the findings on hospital variation in outcomes. Care for patients hospitalized for heart failure is complicated, yet there are relatively few process measures used to describe quality of care. Thus, it may be difficult to relate overall quality of care as indexed by process measures with clinical outcomes. There may also be contraindications or medical exceptions that are present but not documented in the medical record, particularly for emerging measures, which may have altered the results had there been full documentation. We did not assess health-related quality of life, functional capacity, patient satisfaction, and other clinical outcomes that may be of interest.
Conclusions
There are multiple methods to assess and reward quality of care for hospitals. Although attention has shifted from process measures to outcome measures, the correlation of any measure with another measure of quality is minimal. Hospital ranking shifts substantially depending on which of the measures are selected to rank hospitals. Future profiling of hospitals that includes payments for performance or penalties for not meeting standards should be done cautiously, given the disagreement between different measures of quality, or should involve multidimensional measures.
Sources of Funding
This work was supported by American Heart Association Pharmaceutical Roundtable Outcomes Center grant 087512N. Dr Hernandez is a recipient of an American Heart Association Pharmaceutical Roundtable grant (0675060N). Dr Fonarow is supported by the Ahmanson Foundation (Los Angeles, CA) and holds the Eliot Corday Chair in Cardiovascular Medicine and Science.
Disclosures
Dr Hernandez reports receiving research support from Johnson & Johnson (significant), Proventys (significant), and Amylin (significant); and honoraria from Amgen (modest) and Corthera (modest). Dr Fonarow reported receiving research funding from the National Heart, Lung, and Blood Institute and AHRQ (both significant), consulting fees from Novartis (significant) and Pfizer (modest), and honoraria from Medtronic (modest). Dr Heidenreich reported research support from Medtronic. Dr Peterson serves as the principal investigator for the American Heart Association for GWTG data analysis. Drs Yancy and Liang report no conflicts.
- Received February 11, 2011.
- Accepted June 7, 2011.
- © 2011 American Heart Association, Inc.
References
1.
2. Hospital Compare. US Department of Health and Human Services Web site. http://www.hospitalcompare.hhs.gov. Accessed October 11, 2010.
3.
4.
5.
6. Krumholz HM, Brindis RG, Brush JE, Cohen DJ, Epstein AJ, Furie K, Howard G, Peterson ED, Rathore SS, Smith SC Jr, Spertus JA, Wang Y, Normand SL.
7. Bonow RO.
8.
9.
10.
11. Bernheim SM, Grady JN, Lin Z, Wang Y, Wang Y, Savage SV, Bhat KR, Ross JS, Desai MM, Merrill AR, Han LF, Rapp MT, Drye EE, Normand SL, Krumholz HM.
12.
13. Specification Manual for National Hospital Quality Measures. The Joint Commission Web site. http://www.jointcommission.org/PerformanceMeasurement/PerformanceMeasurement/Historical+NHQM+manuals.htm. Accessed October 11, 2010.
14. Bonow RO, Bennett S, Casey DE Jr, Ganiats TG, Hlatky MA, Konstam MA, Lambrew CT, Normand SL, Pina IL, Radford MJ, Smith AL, Stevenson LW, Burke G, Eagle KA, Krumholz HM, Linderbaum J, Masoudi FA, Ritchie JL, Rumsfeld JS, Spertus JA.
15. Hunt SA, Abraham WT, Chin MH, Feldman AM, Francis GS, Ganiats TG, Jessup M, Konstam MA, Mancini DM, Michl K, Oates JA, Rahko PS, Silver MA, Stevenson LW, Yancy CW, Antman EM, Smith SC Jr, Adams CD, Anderson JL, Faxon DP, Fuster V, Halperin JL, Hiratzka LF, Jacobs AK, Nishimura R, Ornato JP, Page RL, Riegel B.
16.
17.
18.
19.
20. Jessup M, Abraham WT, Casey DE, Feldman AM, Francis GS, Ganiats TG, Konstam MA, Mancini DM, Rahko PS, Silver MA, Stevenson LW, Yancy CW.
21.
22. Bonow RO, Masoudi FA, Rumsfeld JS, Delong E, Estes NA III, Goff DC Jr, Grady K, Green LA, Loth AR, Peterson ED, Pina IL, Radford MJ, Shahian DM.
23. Bonow RO, Bennett S, Casey DE Jr, Ganiats TG, Hlatky MA, Konstam MA, Lambrew CT, Normand SL, Pina IL, Radford MJ, Smith AL, Stevenson LW, Bonow RO, Bennett SJ, Burke G, Eagle KA, Krumholz HM, Lambrew CT, Linderbaum J, Masoudi FA, Normand SL, Ritchie JL, Rumsfeld JS, Spertus JA.
24. Keenan PS, Normand SL, Lin Z, Drye EE, Bhat KR, Ross JS, Schuur JD, Stauffer BD, Bernheim SM, Epstein AJ, Wang Y, Herrin J, Chen J, Federer JS, Mattera JA, Wang Y, Krumholz HM.
25.
26.
27. The STS composite quality measurement methodology: executive summary. Society of Thoracic Surgeons Web site. http://www.sts.org/sites/default/files/documents/pdf/QualityExecutiveSummary-Final.pdf. Accessed May 3, 2011.
28. Peterson ED, DeLong ER, Masoudi FA, O'Brien SM, Peterson PN, Rumsfeld JS, Shahian DM, Shaw RE, Goff DC Jr, Grady K, Green LA, Jenkins KJ, Loth A, Radford MJ.
Clinical Perspective
Attention to the quality and value of health care has increased significantly in the United States with the passage of the Affordable Care Act. Heart failure is one of the main areas targeted for improvement in quality and costs nationally. Prior studies have shown major gaps in the use of best therapies, and outcomes are poor. Although there is attention to improving processes of care and outcomes, it is unclear how these metrics of quality correlate with one another or whether hospitals should focus on one area to improve overall quality. This uncertainty also presents challenges to the public, hospitals, and policy makers in designing and participating in pay-for-performance health policies. By using data from the Get With The Guidelines–Heart Failure (GWTG-HF) registry linked with Medicare claims, we examined how profiling of hospitals is affected by different measures of quality of care. In general, agreement between different methods of ranking hospital-based quality of care and 30-day mortality or readmission rankings was poor. Profiling quality of care will require multidimensional ranking methods and/or additional measures.
The Need for Multiple Measures of Hospital Quality. Adrian F. Hernandez, Gregg C. Fonarow, Li Liang, Paul A. Heidenreich, Clyde Yancy, and Eric D. Peterson. Circulation. 2011;124:712–719. Originally published August 8, 2011. https://doi.org/10.1161/CIRCULATIONAHA.111.026088