Performance Metrics as Drivers of Quality
Getting to Second Gear
For simplicity’s sake, the history of US health care performance improvement can be divided into the eras before and after the 1999 publication of To Err is Human by the Institute of Medicine.1 Before publication of that report, the conventional wisdom was that the quality of health care was generally excellent and, in any case, impossible to measure. Guidelines had been developed for some issues, but the preservation of physicians’ autonomy to do what they judged best for patients was often invoked as the first principle of high-quality health care.
Article see p 980
In truth, most health care providers knew well before 1999 that the real story was more complex – that, despite the hard work of good people, health care often fell short of what it could and should be. Then, To Err is Human and its 2001 successor, Crossing the Quality Chasm,2 drew national attention to startling gaps in safety, reliability, efficiency, and the overall experience of care. It was at this point that hospital Boards of Trustees throughout the country began to ask management about quality, and quality became a routine part of Board and management meetings.
At many institutions, the work of improving health care was deconstructed – leading to several new roles and an abundance of new metrics. New titles included Chief Quality Officer, Chief Safety Officer, Chief Patient Experience Officer, Chief Population Management Officer, and Chief Value Officer. Sometimes these roles were combined, sometimes not. To manage these different domains of quality, internally derived and externally imposed measures proliferated, and hospitals hired a dozen or more personnel simply to collect and track the data.
Inevitably, complaints about measurement fatigue and skepticism about performance measurement emerged.3 A commonly voiced complaint from clinicians was “Just tell me what you want me to do. Tell me the three most important things.” Leaders of health care organizations asked which of the dozens of pages of reports on their desks truly mattered.
The painful answer seems to be all of them. Reliability matters. Safety matters. Efficiency matters. Patient experience matters. All of these dimensions of performance are intertwined, and interact to define the quality of an institution’s care.
Life was simpler when the major metric by which institutions judged their success was financial performance. In truth, these varied quality metrics are much better reflections of what truly defines success for health care organizations, so their complexity should not be viewed as an unsolvable problem. Indeed, they are already driving improvement in outcomes – but there are some difficult steps to be taken before performance improvement moves into a higher gear.
In this issue of Circulation, Mehta et al4 set the stage for the work ahead by revealing the interaction between two dimensions of quality – the reliability with which clinicians delivered evidence-based interventions and the safety with which they delivered them. They used data from the national quality improvement initiative, Can Rapid Risk Stratification of Unstable Angina Patients Suppress ADverse Outcomes with Early Implementation of the ACC/AHA Guidelines (CRUSADE), and analyzed the relationship between clinical outcomes and measures of both guideline adherence and safety.
In this 39 291-patient population, median adherence to ACC/AHA guideline-recommended therapies was high (85%), though not perfect, whereas antithrombotic agents were dosed within safe ranges just 53% of the time. In aggregate, the hospitals that were above average on guideline adherence were above average on every one of the acute and discharge treatment measures (Table 2 of Mehta et al4). Similarly, the hospitals that were below average on safety were more likely to use excessive doses of both heparin and glycoprotein IIb/IIIa inhibitors. Nevertheless, the world was not divided into good and bad performers. As shown in Figure 1 of Mehta et al,4 there was only a loose correlation between performance on one dimension of quality versus the other.
In short, both guideline adherence and safety mattered. There were striking declines in the risk for in-hospital mortality for every 10% improvement in adherence or in safety. The hospitals with above-average performance on both had a 17% reduction in their risk-adjusted odds of mortality compared with hospitals that were below average on both. Hospitals that were high on one and low on the other had intermediate risk-adjusted mortality rates. Use of safe medication doses also correlated with lower rates of bleeding events.
The data have face validity. Safety and adherence to guidelines both strongly influence mortality, even in a study population that is substantial, but not enormous, in size. As the authors so aptly note in the title and introduction, it is important to do the right things, and it is important to do them right – and one cannot automatically assume both are happening. There are errors of omission and errors of commission – and both worsen patient outcomes.
What these findings imply for performance improvement is that it is time to reintegrate the various streams of quality that emerged after the Institute of Medicine reports – reliability in guideline adherence, safety, patient experience, and efficiency. Even more important, it is time to redefine performance in terms of what we are actually trying to accomplish in health care. We cannot assume that a hospital that is excellent on guideline compliance metrics is also excellent in patient safety. All of these dimensions of quality are separate, yet intertwined, and they combine to determine the outcomes and costs of care. No one is likely to be the best on all dimensions of quality. We have to measure them all, and try to improve them all – and we have to measure and try to improve their ultimate results.
The first major point that emerges from this logic train is that we have to measure actual patient outcomes. Clinicians have been leery of placing too much emphasis on patient outcomes such as mortality and complications because of difficulties with risk adjustment and the impact of clinical and socioeconomic factors beyond their control. That said, the real focus of health care is the welfare of patients, not the reliability of providers. And, as demonstrated by Mehta et al,4 hospitals can be superb on one dimension of quality, poor on another, and have mediocre outcomes. Knowing that their outcomes are mediocre is likely to make improvement much more compelling for the process measures where the institution is below average.
The second major point is that no single outcome tells the whole story for any subset of patients; there are multiple outcomes, and they should all be measured and reported. Porter has described a hierarchy of outcomes, with Tier 1 outcomes (hard clinical outcomes) being most important.5 But, he argues, Tier 2 outcomes related to the process (eg, readmissions, the disutility of care) matter as well, as do Tier 3 outcomes, which reflect the durability of health interventions (eg, the likelihood of a patient needing a repeat procedure).
It is crucial that institutions understand when they are statistically worse than expected on Tier 1 outcomes, such as mortality. That should precipitate an all-hands-on-deck effort to dissect whether patient selection, poor guideline adherence, or suboptimal safety is causing the gap. On the other hand, organizations cannot expect to pull away from the crowd on the basis of the hardest Tier 1 outcomes such as mortality, because survival rates tend to be clustered at high levels. Attention should then turn to nonfatal clinical outcomes – including what are increasingly known as patient-reported outcome measures – and Tier 2 and Tier 3 outcomes.
The third point that emerges from this logic train is that new methods are needed to motivate clinicians to care about these outcomes and process metrics.6 The challenge of engaging hard-working clinicians in improving how they work together is complicated by the relentless demands they already face in their workday. That said, if physicians in particular are not engaged in the process of improvement, much of the data collection for quality improvement is pointless. Progress on clinician engagement takes thoughtful use of incentives, both financial and nonfinancial, all as part of the pursuit of a shared purpose that is widely embraced by the health care organization’s providers.
As clinicians organize to take on the overarching task of improving their outcomes, they should note one other finding from Mehta et al.4 In this report, the institutions with both low adherence and low safety tended to be smaller hospitals that were less likely to have capabilities for revascularization and whose patients were less likely to be treated by a cardiologist. This finding supports concentrating volumes of patients where there is sufficient scale to support a team with real expertise in meeting their needs. Consolidation of care is painful because it inevitably means deciding not to deliver it everywhere. Patients may have to travel a half hour farther, but the improvement in outcomes will likely offset the inconvenience.
In summary, to take performance in health care to a higher level, the time has arrived to bring together the disparate measurement streams related to quality, and recognize that our work is nothing less than being reliably excellent in all of them. Health care is complex, and there are no 2 or 3 most important functions that define excellence. Therefore, we must be willing to measure actual patient outcomes, and work relentlessly to improve them. The question that providers should ask when they look at outcomes data is not who is the best, but how improvement can be made. And, as they strive to do so, they should then turn to the many types of quality-related data that collectively influence the value of their care.
Dr Lee is Chief Medical Officer for Press Ganey.
The opinions expressed in this article are not necessarily those of the editors or of the American Heart Association.
- © 2015 American Heart Association, Inc.
- 1.↵Kohn LT, Corrigan JM, Donaldson MS, eds. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 1999.
- 2.↵Committee on Quality of Health Care in America, Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
- 4.↵Mehta RH, Chen AY, Alexander KP, Ohman EM, Roe MT, Peterson ED. Circulation. 2015;132:980.
- 5.↵Porter ME. What is value in health care? N Engl J Med. 2010;363:2477–2481.
- 6.↵Lee TH, Cosgrove D. Engaging doctors in the health care revolution. Harv Bus Rev. 2014.