Letter by Payne and Webb Regarding Article, “Agreement Among Cardiovascular Disease Risk Calculators”
To the Editor:
Allan et al1 recently compared a selection of calculators designed to provide estimates of cardiovascular risk, finding considerable differences in the absolute levels of estimated risk and discrepancies in risk categorization. Unfortunately, the authors do not explore why such differences exist, nor do they examine how variations in the way in which risk is presented may influence patient (or indeed clinician) behavior.
That considerable discrepancies exist between calculators is probably unsurprising, accounted for by disparities in the clinical end points and time frames of interest, variations in the underlying patient populations, use of different mathematical models and risk factors, and differences in the manner in which the underlying calculations are implemented in practice. These explanations are not explicitly examined in the article, although it is worthwhile examining some of the reported comparisons in more detail. For example, 4 of the 13 different 10-year CVD calculators use the same underlying Framingham-based algorithm2 (Edinburgh BNF, Primary CVD Risk Calculator, JBS Assessor, and JBS Risk Charts). The agreement between the first 3 is ≥91%, although it is lower for the paper JBS Risk Charts (75% to 78%), underlining the impact of simply using different implementations of the same underlying risk equation. Furthermore, 4 of the algorithms are provided by a single calculator (our own Edinburgh Cardiovascular Risk Calculator, cvrisk.mvm.ed.ac.uk), with agreement ranging from 59% to 86%, highlighting the effect of different algorithms, populations, and end points, independent of mode of implementation. Of particular interest, the lowest degree of concordance observed between the Edinburgh calculations (59%) is actually for 2 different outcomes (coronary versus cardiovascular disease) based on exactly the same underlying population.
It should also be remembered that such calculators are tools used in discussion between clinicians and patients. Decision-making varies not just in relation to the numbers, but is also influenced by the way in which risk is presented.3,4 Patients’ understanding of risk may be poor, with a lack of familiarity with concepts such as relative or absolute risk. Decisions are influenced more strongly by emotions than by facts, and the framing of risk by the clinician may have a strong bearing on patients’ choices. Visual presentations of risk may aid comprehension, and simpler methods of communicating risk may be more effective for motivating behavior change. All these factors mean that the manner in which these calculators are used in practice is far more likely to affect clinical management than any single numeric value.
So although we applaud the authors for a helpful comparison of a range of tools, we suggest that the analysis is perhaps overly simplistic. Further research is essential not only to help clinicians decide which clinical outcome, underlying population, and risk algorithm used by a particular calculator is likely to provide the most accurate or relevant estimate of risk, but also to better understand how the risk estimates generated by these calculators can best be communicated to patients to facilitate informed decision-making.
Rupert A. Payne, PhD, MRCP, MRCGP
Cambridge Centre for Health Services Research
University of Cambridge
David J. Webb, DSc, FRCP, FRSE, FMedSci
Centre for Cardiovascular Science
University of Edinburgh
© 2013 American Heart Association, Inc.