Author Information
- James G. Jollis, MD, FACC⁎
- ⁎Reprint requests and correspondence: Dr. James G. Jollis, Box 3254, Duke University Medical Center, Durham, North Carolina 27710
Key Words
- percutaneous transluminal coronary angioplasty
- hospital mortality
- outcome assessment (health care)
- quality of health care
- myocardial infarction
For those who perform coronary revascularization every day, the decision for angioplasty seems as natural and well considered as a decision to carry an umbrella in the rain. From the perspective of patients and their families initially faced with the prospect of coronary artery disease and the need for coronary intervention, this decision is much less certain and often raises questions, fears, and doubts. Topping the list of questions are: “Do I really need this?” and “Is this the best hospital and medical team to perform this procedure?”
Reflecting the exceptional group of colleagues who make up the field of cardiology, extensive tools and systems have been developed to address these concerns. Regarding the question of whether to perform coronary revascularization, decades of randomized trials and pages of expert guidelines provide clear direction on the benefits, risks, and indications for intervention. To address the question “Is this the best facility?”, cardiology has been far out in front of the medical field in establishing national registries by which to identify best care.
In this issue of JACC: Cardiovascular Interventions, Klein et al. (1) consider the ability of the ACC-NCDR (American College of Cardiology–National Cardiovascular Data Registry) database to identify quality care according to risk-adjusted mortality rates. Following a hypothesis put forth by Luft and Romano (2) that hospital outlier status should be related to future performance, this study examines changes in institutional rankings over 4 years according to risk-adjusted mortality. From a practical standpoint, if better or worse quality can be identified according to risk-adjusted mortality, one would expect to find temporal relationships. For those hospitals that did not implement significant programmatic changes, “high” or “low” status should be consistently identified in contiguous years.
The longitudinal findings of the study are somewhat challenging to interpret from the perspective of temporal trends, as only 180 of the 403 hospitals included in the analyses participated in all 4 years. Annual enrollment in the NCDR increased from 228 to 339 hospitals between 2001 and 2004. For programs participating in the NCDR for 2 or more years, rankings varied considerably by year. Not surprisingly, the programs with the fewest annual cases exhibited the greatest variation, shifting an average of 53 places between years.
When quality was judged according to the highest risk-adjusted mortality rates, 64 different hospitals ranked among the top 20 institutions in any given year, 8 did so in 2 years, and only 3 in 3 or more years. Thus, most hospitals were unlikely to be identified as high outliers on a consistent basis according to risk-adjusted mortality.
What is the significance of these findings? First, this study confirms the finding of Luft and Romano (2) that, among low-risk patients, outlier status is not predictive of subsequent hospital performance. With coronary angioplasty mortality rates approaching 1%, there are simply too few adverse outcomes to reliably identify hospitals that attain better or worse outcomes according to consistent yearly trends. Second, this work provides empiric evidence of the difficulty of identifying “quality low-volume hospitals.” Rankings that vary by an average of 53 places between years suggest that chance plays a substantial role in model estimates when dealing with small samples and infrequent outcomes.
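The role of chance can be illustrated with a small simulation. In the hypothetical sketch below (invented hospitals and volumes, not the NCDR data or models), every hospital has an identical true mortality rate of 1%, so any difference in rank between years is pure sampling noise; rankings by observed mortality nonetheless shift substantially from year to year.

```python
import random

random.seed(0)

TRUE_MORTALITY = 0.01  # ~1% angioplasty mortality, as cited in the text
N_HOSPITALS = 200      # hypothetical registry size (assumption)
VOLUMES = [random.randint(50, 800) for _ in range(N_HOSPITALS)]

def yearly_ranks():
    """Rank hospitals by observed mortality for one simulated year.

    Every hospital shares the same true mortality, so all ranking
    differences are sampling noise; ties are broken at random.
    """
    rates = [sum(random.random() < TRUE_MORTALITY for _ in range(v)) / v
             for v in VOLUMES]
    order = sorted(range(N_HOSPITALS),
                   key=lambda i: (rates[i], random.random()))
    ranks = [0] * N_HOSPITALS
    for place, i in enumerate(order):
        ranks[i] = place + 1
    return ranks

year1, year2 = yearly_ranks(), yearly_ranks()
mean_shift = sum(abs(a - b) for a, b in zip(year1, year2)) / N_HOSPITALS
print(f"mean rank change between years: {mean_shift:.0f} places")
```

Even with every hospital equally “good,” the mean year-to-year rank change in such a simulation is typically tens of places, in the same range as the 53-place average variation reported for low-volume programs.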
In considering model performance, the work refers to C-indices of 0.9, suggesting “quite high” discrimination and “good confidence in the estimated expected probabilities for calculation of the O/E ratio” (1). With such high C-indices, how is it possible to find such variation in rankings? The C-index is a single measure of model performance, and its significance should not be overestimated. This index simply represents the proportion of all possible pairs of patients with different outcomes (1 survivor and 1 death) for whom the regression model assigns higher risk to the patient who died. The index does not take into account the magnitude of this difference. For most pairs, the models will estimate that both patients will survive, yet the C-index counts the assignment of a slightly higher risk as “discriminating.” A high C-index should not be confused with the ability to predict which patients will die. To directly assess the “estimated expected probabilities,” one must examine not the C-index but model calibration, that is, the extent to which predicted mortality corresponds to observed mortality across the spectrum of risk.
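This distinction can be made concrete by computing the C-index by hand. In the hypothetical cohort below (not the NCDR model), every patient who died is assigned a predicted risk of 2% and every survivor 1%: the model predicts survival for everyone and is badly miscalibrated, yet its C-index is a “perfect” 1.0.

```python
def c_index(preds, outcomes):
    """Proportion of (survivor, death) pairs in which the death received
    the higher predicted risk; tied predictions count half."""
    concordant = ties = pairs = 0
    for i in range(len(preds)):
        for j in range(len(preds)):
            if outcomes[i] == 1 and outcomes[j] == 0:  # i died, j survived
                pairs += 1
                if preds[i] > preds[j]:
                    concordant += 1
                elif preds[i] == preds[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / pairs

# Hypothetical cohort: deaths get a predicted risk only slightly higher
# than survivors' (2% vs. 1%) -- the model expects everyone to survive.
preds    = [0.01] * 95 + [0.02] * 5
outcomes = [0] * 95 + [1] * 5   # 1 = death
print(c_index(preds, outcomes))  # 1.0: "perfect" discrimination
```

In this example the observed mortality among patients assigned 2% predicted risk is 100%, so a calibration check (predicted versus observed mortality by risk stratum) would immediately expose what the C-index conceals.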
The C-index and model calibration are only 2 of many factors that bear on our ability to risk adjust and identify best care. Risk adjustment efforts should also be gauged according to the consistency of risk factor and outcome reporting across hospitals, event rates, sample size, patient selection, and longitudinal correlations of hospital rank such as those examined by Klein et al. (1). With rankings that can change by 50 places in a given year, systematic assurance of uniform risk factor and complication reporting is of particular importance.
With continued refinement of our techniques including analyses such as these, cardiology has substantial resources by which to uphold the quality of coronary revascularization and definitively address patient concerns. However, coronary angioplasty deaths are so infrequent that risk-adjusted models alone will remain limited in their ability to fully characterize quality care. Extending the weather analogy, there are simply too few storms in cities such as Palm Springs to statistically judge rain forecasting skills, and too few deaths in coronary interventions to reliably identify quality programs, particularly among low volume hospitals. Thus, coronary interventional quality efforts must continue to wield a broad umbrella of multiple metrics including training and practice standards, process measurements, and risk-adjusted outcomes.
Dr. Jollis has received grant support from United Healthcare, Genentech, and Sanofi Aventis.
⁎ Editorials published in JACC: Cardiovascular Interventions reflect the views of the authors and do not necessarily represent the views of JACC: Cardiovascular Interventions or the American College of Cardiology.