Author + information
- Received August 31, 2016
- Revision received October 31, 2016
- Accepted October 31, 2016
- Published online February 6, 2017.
- Edward L. Hannan, PhDa,∗
- Ye Zhong, MDa,
- Kimberly Cozzens, MAa,
- Foster Gesten, MDb,
- Marcus Friedrich, MDb,
- Peter B. Berger, MDc,
- Alice K. Jacobs, MDd,
- Gary Walford, MDe,
- Frederick S.K. Ling, MDf,
- Ferdinand J. Venditti, MDg and
- Spencer B. King III, MDh
- aUniversity at Albany, State University of New York, Albany, New York
- bNew York State Department of Health, Albany, New York
- cNorthwell Health, Great Neck, New York
- dBoston Medical Center, Boston, Massachusetts
- eJohns Hopkins University, Baltimore, Maryland
- fUniversity of Rochester Medical Center, Rochester, New York
- gAlbany Medical Center, Albany, New York
- hSt. Joseph’s Health System, Atlanta, Georgia
- ∗Address for correspondence: Dr. Edward L. Hannan, School of Public Health, State University of New York, University at Albany, One University Place, Rensselaer, New York 12144-3456.
Objectives The authors examined the impact of including shock patients in public reporting of percutaneous coronary intervention (PCI) risk-adjusted mortality.
Background There is concern that an unintended consequence of statewide public reporting of medical outcomes is the avoidance of appropriate interventions for high-risk patients.
Methods New York State’s PCI registry was used to compare hospital and physician risk-adjusted mortality rates and outliers from New York’s public report models with rates and outliers based on statistical models that include refractory shock patients and exclude both refractory shock and other shock patients.
Results Correlations between the public report model and each of the other 2 models were above 0.92 for hospital risk-adjusted rates and were 0.99 for all physician risk-adjusted rates (p < 0.0001). There were 11 physicians with lower than expected mortality rates (low outliers) and 41 physicians with higher than expected mortality rates (high outliers) across the 3 time periods in the public report, compared with 10 low outliers and 40 high outliers if all shock patients had been excluded. There was considerable overlap among outliers identified by the 3 models. Findings were similar for hospital outliers.
Conclusions Risk-adjusted hospital and physician mortality rates are highly correlated regardless of whether shock patients are included in public reporting. The numbers of outliers are similar, and outlier changes are minimal, although 10% to 15% of cardiologists who were outliers in either exclusion rule were not outliers in the other one. This information can form a basis for subsequent discussions regarding the exclusion of high-risk patients from public reporting.
Following the public releases of coronary artery bypass graft (CABG) surgery hospital outcomes data in New York in 1989 and Pennsylvania in 1990, several states now release risk-adjusted mortality rates for hospitals, and some release the same data for cardiologists and surgeons for 1 or more cardiac procedures (1–5). Also, the Centers for Medicare & Medicaid Services (CMS) now releases hospital risk-adjusted outcomes for several medical conditions, as well as risk-adjusted mortality and readmissions data for CABG surgery (6).
Although public reporting of health outcomes by governmental agencies and private companies has become commonplace in the last 20 years, hospitals and physicians have expressed considerable concern about its detrimental impact. Perhaps the most troublesome of these concerns is that high-risk patients who could benefit from a procedure are being denied access because physicians and surgeons fear that their risk-adjusted mortality rates will be negatively affected (7–16).
The current policy for public reporting of risk-adjusted in-hospital/30-day percutaneous coronary intervention (PCI) mortality rates for hospitals and physicians in New York is to include all patients except patients with cardiogenic refractory shock and a subset of patients with anoxic brain injury. However, some cardiologists have questioned whether these exclusions should be expanded to also include patients with nonrefractory shock, referred to as hemodynamically unstable patients in the New York registry.
The purpose of this study was to examine the impact on risk-adjusted mortality rates and outlier status for hospitals and physicians if each of 2 separate changes were made to the current public report: 1) inclusion of the refractory shock patients who are currently excluded; and 2) exclusion of all shock patients.
The databases used to conduct the study were New York’s Percutaneous Coronary Interventions Reporting System (PCIRS) and New York’s Vital Statistics file. PCIRS was created in 1992 for the purpose of evaluating and improving the quality of PCI in New York through the risk adjustment of outcomes and dissemination of reports to hospitals, cardiologists, and the public. It contains demographics; patient risk factors; complications; hospital and cardiologist identifiers; admission, discharge, and procedure dates; and discharge disposition for all PCI procedures performed in nonfederal hospitals in the state.
Data are audited for completeness and accuracy by matching to New York’s acute care hospital administrative database, SPARCS (Statewide Planning and Research Cooperative system), and by reviewing medical records from hospitals. Records are chosen for review each year from a sample of hospitals based on reported prevalences of patient risk factors, accuracy of previous reporting, and time since last audit.
A total of 60 hospitals performed PCI in the state in 2013. Vital statistics data were matched to PCIRS using unique patient identifiers in order to obtain deaths that occurred after discharge but within 30 days following the index PCI procedure.
For each time period included in the analyses, 3 groups of patients were analyzed after having removed non-U.S. residents (because of inability to follow them after discharge) and patients excluded from New York’s public reports because of anoxic brain injuries (1): 1) all patients undergoing PCI in the time period except refractory shock patients (the data used in New York’s public reports since 2006); 2) all patients undergoing PCI (i.e., refractory shock patients are also included); and 3) all patients undergoing PCI except refractory shock patients and nonrefractory shock patients. Separate analyses were conducted using patients from each of the years 2011 to 2013 for evaluating hospital risk-adjusted in-hospital/30-day mortality rates, and for the years 2009 to 2011 combined, 2010 to 2012 combined, and 2011 to 2013 combined for evaluating risk-adjusted in-hospital/30-day mortality rates for physicians. The 3 separate years 2011, 2012, and 2013 were chosen for evaluating hospitals because they were the latest 3 years available with audited data. The 3 overlapping 3-year periods beginning with 2009 to 2011 were chosen because physicians are evaluated using 3 years of data, and these were the 3 latest 3-year periods available.
In PCIRS, the data element “refractory shock” is defined as acute hypotension (systolic blood pressure <80 mm Hg) and/or low cardiac index (<2.0 l/min/m2) just before commencement of PCI despite pharmacological or mechanical support. Also, ongoing resuscitation warrants the coding of refractory shock. Nonrefractory shock (called “hemodynamic instability” in PCIRS during the time of this study) is defined as requiring pharmacological or mechanical support to maintain blood pressure or cardiac index at the time of the procedure with evidence of hypotension or low cardiac index before pharmacological or mechanical support. Note that these definitions differ slightly from the National Cardiovascular Data Registry (NCDR) definition of cardiogenic shock (a sustained [>30 min] episode of systolic blood pressure <90 mm Hg and/or cardiac index <2.2 l/min/m2 and/or pharmacological or mechanical support).
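These coding rules can be summarized as a small decision sketch. The field names used below (sbp, cardiac_index, on_support, ongoing_resuscitation, hypotension_before_support) are illustrative placeholders, not actual PCIRS data elements:

```python
def classify_shock(sbp, cardiac_index, on_support,
                   ongoing_resuscitation=False,
                   hypotension_before_support=False):
    """Return 'refractory', 'nonrefractory', or 'none' per the PCIRS
    definitions paraphrased above (field names are hypothetical)."""
    if ongoing_resuscitation:
        # ongoing resuscitation warrants coding refractory shock
        return "refractory"
    if on_support and (sbp < 80 or cardiac_index < 2.0):
        # hypotension or low cardiac index despite pharmacological
        # or mechanical support, just before commencement of PCI
        return "refractory"
    if on_support and hypotension_before_support:
        # support maintains blood pressure/cardiac index, with evidence
        # of hypotension or low index before support was started
        return "nonrefractory"
    return "none"
```

For example, a patient on support with a systolic pressure of 75 mm Hg would be coded `classify_shock(75, 1.8, True)` → "refractory", whereas a patient stabilized on support would be "nonrefractory".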
Backward stepwise logistic regression models (with p < 0.05 for removal) were used to predict in-hospital or 30-day mortality (death in the index admission during or after the procedure, or death at any time within 30 days of the index procedure but after discharge from the hospital) using all patient-specific variables in PCIRS as candidates for the models. Models that exclude refractory shock patients, but include all other PCI patients, were already published in public reports for the years 2009 to 2013 (1,17,18). Similar 1-year models (for evaluating hospitals) and 3-year models (for evaluating physicians) were created that: 1) included all patients with refractory shock in addition to all other patients; and 2) excluded all shock patients, but included everyone else. Each of the 1-year models was used to calculate risk-adjusted mortality rates and outliers for hospitals, and each of the 3-year models was used to calculate risk-adjusted rates and outliers for physicians. A high-outlier provider is defined as having a risk-adjusted mortality rate significantly higher than the overall statewide mortality rate, and low-outlier providers are defined as having risk-adjusted rates significantly lower than the statewide rate. Methods for these calculations are provided elsewhere (1).
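The indirect-standardization step behind these risk-adjusted rates can be sketched as follows. This is a simplified illustration with a normal-approximation outlier test; the published reports derive expected deaths from the fitted logistic models and use the exact methods described in reference 1:

```python
import math

def risk_adjusted_mortality(deaths, predicted_probs, statewide_rate, z_crit=1.96):
    """Indirect standardization: RAMR = (observed / expected) x statewide rate.

    predicted_probs are the per-patient mortality risks from a fitted
    logistic model. The outlier test here is a normal approximation and
    is illustrative only; the actual reports use exact methods.
    """
    expected = sum(predicted_probs)            # E = sum of predicted risks
    ramr = deaths / expected * statewide_rate  # risk-adjusted mortality rate
    # standard error of observed deaths under the model
    se = math.sqrt(sum(p * (1 - p) for p in predicted_probs))
    z = (deaths - expected) / se
    if z > z_crit:
        flag = "high outlier"    # significantly above the statewide rate
    elif z < -z_crit:
        flag = "low outlier"     # significantly below the statewide rate
    else:
        flag = "not an outlier"
    return ramr, flag
```

For instance, a provider with 5 observed deaths against 1.0 expected (100 cases each at 1% predicted risk) in a state with a 1% mortality rate would have a RAMR of 5% and be flagged as a high outlier under this approximation.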
For each time period and provider type of interest, results from the models were compared by contrasting high and low outliers and by correlating provider risk-adjusted mortality rates for each of the new models with those from the public report model.
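The model comparison described above amounts to a Pearson correlation between two vectors of provider risk-adjusted rates (one per model); a minimal sketch using only the standard library:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two lists of provider
    risk-adjusted mortality rates (one value per provider)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A correlation near 1.0, as reported in Table 3, indicates that providers are ranked almost identically by the two models being compared.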
The number of refractory shock patients undergoing PCI increased almost 3-fold from 83 (0.15% of all PCI patients) in 2005, the year before refractory shock patients were excluded from public reporting, to 236 (0.56% of all patients) in 2013 (Table 1). During the same time period, the number of other shock patients remained relatively constant, having increased from 251 patients in 2006 to 270 patients in 2013. During that time, the in-hospital/30-day mortality of refractory shock patients undergoing PCI increased initially from 33.7% in 2005 before their exclusion from public reporting to roughly 50% in 2008 and all subsequent years. Mortality rates for nonrefractory shock patients (who have always been included in public reporting) had more variability but no obvious trends.
A total of 60 hospitals performed PCI in New York in 2013. The 2013 statistical model used to evaluate hospitals’ risk-adjusted mortality rates that included refractory shock patients contained all variables in the public report model (1) except female sex and diabetes. However, it also contained refractory shock (with an odds ratio [OR] of 18.47, p < 0.0001) and body mass index. The public report model identified 3 low outliers (5% of the total number of hospitals) and 2 high outlier hospitals (3.3% of the total), whereas the model that included refractory shock identified 1 of those low outliers and 1 of those high outliers, but no other outliers (Table 2). As noted in Table 3, the correlation in hospital risk-adjusted mortality rates between this model and the public report model was 0.97 (p < 0.0001).
The 2013 statistical model that excluded all shock patients (refractory shock and nonrefractory shock) did not include 4 variables from the 2013 public report model (nonrefractory shock, female sex, diabetes, 2-vessel disease) and included 1 additional variable (body mass index). That model’s 1 high and 1 low outlier hospitals were identical to those in the model that included all shock patients (Table 2). The correlation in hospital risk-adjusted mortality rates between this model and the public report model was 0.96 (p < 0.0001) (Table 3).
The same exercise was then conducted for years 2012 and 2011 (Tables 2 and 3). For 2012, the public report model yielded 1 low outlier and 2 high outliers. For the model with refractory shock patients included, there were no low outliers and 3 high outliers, 2 of which were the 2 outliers in the public report model (Table 2). The model with all shock patients excluded yielded no low outliers and the same 2 high outliers as the public report model. In 2011, all 3 models had the same 2 high outliers, and 2 of the 3 low outliers in the public report model were also outliers in the model with refractory shock included. The model without any shock patients had no low outliers. For both 2011 and 2012, the correlations between the hospital risk-adjusted mortality rates in the public report and each of the other models ranged from 0.92 to 0.95 and were all highly significant.
Physician risk-adjusted rates were evaluated in public report models based on 3 years instead of 1 (to accumulate more patients for each physician). In 2011 to 2013, a total of 400 physicians performed PCI in the state. There were 4 low outliers (1%) and 14 high outliers (3.5%) among the 400 physicians in the public report (1). Three of those 4 low outliers (and no other physicians) were designated as low outliers in the model with refractory shock patients included (Table 4). Eleven of the public report model’s 14 high outlier physicians (and 1 other physician) were designated as high outliers by the model that included refractory shock patients. For the model with all shock patients excluded, the same 3 physicians that were identified as low outliers in the model with refractory shock (and 3 of the 4 identified in the public report model) were low outliers (Table 4). Eleven of the 14 high outlier physicians that were in common between the report model and the model with refractory shock were identified as high outliers by the model that excluded all shock patients. An additional physician not flagged in the other model was also identified as a high outlier in the model with all shock patients excluded. Table 3 notes that correlations in physician risk-adjusted mortality rates between the public report model for 2011 to 2013 and the other two 2011 to 2013 models are extremely high (0.99 and 0.99) and highly significant (p < 0.001).
Physician outliers for the 3-year periods 2010 to 2012 and 2009 to 2011 are also presented in Table 4. As indicated, in 2010 to 2012, the high outliers and low outliers in the public report model differed from the corresponding outliers in the other models by only 1 physician in each case. In 2009 to 2011, the correspondence was not as good, but there was still considerable commonality, and the numbers of outliers in each model were quite close. All physician risk-adjusted mortality rate correlations between the public report model and the other 2 models were extremely high and significant for the 2010 to 2012 and the 2009 to 2011 models (all correlations equal to 0.99 with p < 0.0001) (Table 3).
For more than 25 years, New York and Pennsylvania have been releasing data on risk-adjusted mortality rates for cardiac procedures, and for many years, similar cardiac outcomes data have also been released by Massachusetts, California, and New Jersey (1–5). The movement toward public release of risk-adjusted outcomes by governmental agencies is also evident based on CMS public reports on several medical conditions as well as CABG surgery (6).
However, considerable concern has been raised about the unintended consequences of these releases, particularly with regard to the possible avoidance of high-risk patients who may have benefitted from treatment (7–14). Evidence of this concern is contained in a survey of New York State physicians by Narins et al. (12) that found that 83% of the respondents thought that patients who might benefit from PCI may not receive the procedure as a result of public reporting of physician-specific patient mortality rates, and 85% of respondents felt that the risk-adjusted model for PCI “is not sufficient to avoid punishing physicians who perform high-risk interventions.”
A few studies, most of them very recent, found that acute myocardial infarction (AMI) patients or AMI patients with shock receive PCI less frequently in public reporting states than in comparison states that do not have public reporting of PCI outcomes (10,13–16). In some studies, lower PCI rates for these high-risk patients were associated with higher mortality rates in New York (10,14). For example, Waldo et al. (14) found that in comparison to 5 other eastern states, public reporting states New York and Massachusetts performed fewer PCIs per capita for AMI patients (OR: 0.81; 95% confidence interval [CI]: 0.67 to 0.96) and had higher adjusted in-hospital mortality (adjusted OR: 1.30; 95% CI: 1.13 to 1.50). In other studies, New York’s mortality rate was lower or not significantly different than in the comparator states (13,15,16). For example, McCabe et al. (16) found that rates of PCI for patients with AMI and shock remained lower in New York in relation to comparator states without public reporting even after New York discontinued public reporting for patients with refractory shock. However, New York patients had a lower in-hospital mortality rate (35.5% vs. 38.7%, p < 0.001).
Other studies have arrived at different conclusions. In a study using Medicare data from 1994 to 1999, Hannan et al. (19) found that New York CABG surgery patients had a significantly higher prevalence of AMI, age ≥80 years, emergency admissions, and female patients than the remainder of the United States. Also, out-of-state referrals were lower than in the remainder of the country, so there was no evidence of higher outmigration of patients in New York.
There is also evidence that higher-risk patients do not detrimentally affect hospitals’ quality ratings. In a study that tested the impact of high-risk cases on hospital quality ratings for PCI, Sherwood et al. (20) used the NCDR to examine the calibration of the Version 4 NCDR risk-adjusted mortality model and to compare the performance of hospitals treating the highest-risk PCI patients versus hospitals treating lower-risk PCI cases. They found that the observed/expected ratio for the highest risk quintile was actually better (0.91; 95% CI: 0.87 to 0.96) than that of the lowest-risk quintile of hospitals (1.10, 95% CI: 1.03 to 1.17). The authors conclude that there is no evidence that treating high-risk PCI patients adversely affects hospital risk-adjusted mortality rates. However, there remains the possibility that hospitals willing to treat the highest risk cases are higher quality hospitals, and this is the reason why they have superior risk-adjusted mortality rates (21). Also, the results of the Narins study (12) mentioned in the preceding text suggest that the main concern that may lead to the avoidance of beneficial high-risk cases is the publication of physician-level data, so it is important to investigate the impact of high-risk cases on physician risk-adjusted rates as well as on hospital risk-adjusted rates (12).
The purpose of our study was to examine differences in hospital and physician risk-adjusted mortality rates and outliers for PCI between the rates and outliers publicly reported in the years 2011 to 2013 in New York and the rates and outliers that would have resulted from 2 changes in the current policy of excluding refractory shock patients from public reporting. The first change was to include refractory shock patients who are currently excluded from public reporting in New York in addition to all other shock patients, and the second change was to exclude all shock patients (i.e., nonrefractory shock patients as well as refractory shock patients) from public reporting.
It was of interest to explore changes resulting from statistical models that include refractory shock patients because we found that since the exclusion of these patients from public reporting, there has been a large increase in the number of these patients undergoing PCI and a large increase in the mortality rate of these patients. This suggests that the group of refractory shock patients receiving PCI since the exclusion of these patients from public reporting is of higher risk on average, and that these highest-risk refractory shock patients would be ones who physicians most feared would adversely affect their outcomes in public reporting. Despite our examination of the hypothetical impact of public reporting when these patients are included, it should be noted that the Department of Health is not currently considering including refractory shock patients in future public reporting. We also examined the exclusion of patients with lesser degrees of shock in addition to refractory shock patients because some physicians in New York have continued to express concern about the detrimental impact of performing procedures on these higher-risk patients on physicians’ risk-adjusted mortality rates for their PCI patients.
We examined the exclusion of nonrefractory shock patients from public reporting because although they are not at as high risk as refractory shock patients, they have mortality rates considerably higher than nonshock patients, and are in fact not distinguished from refractory shock patients in many databases, including the NCDR.
We found that there was a very high correspondence between hospital risk-adjusted mortality rates based on the public report models in each of 3 different years, and the risk-adjusted rates based on a model that included refractory shock patients as well as a model that excluded all shock patients. Correlations ranged from 0.92 to 0.97 for hospital risk-adjusted rates and were all highly significant (p < 0.0001). Six hospitals in total were low outliers (had a lower than expected mortality) in the public reports across 3 separate years, compared with 4 hospitals if refractory shock patients had been included and 3 hospitals if all shock patients had been excluded. Six hospitals in total were also high outliers in the 3 public report models, compared with 6 hospitals if refractory shock patients had been included and 5 hospitals if all shock patients had been excluded. Also, there was considerable overlap in outlier identification. Because we assume more low outliers and fewer high outliers would be desirable to providers, the public report would appear to be slightly more desirable than the alternatives with regard to low outliers and about the same with regard to high outliers.
For physician outliers, there were a total of 11 low outliers and 41 high outliers across the 3 time periods in the public report, compared with 8 low outliers and 37 high outliers if refractory shock patients had been included, and 10 low outliers and 40 high outliers if all shock patients had been excluded. Again, there was considerable overlap among outliers identified by the models. A comparison of the public report high outliers with the high outliers if no shock patients were reported shows that 6 of 41 (14.6%) high outliers in public reports were not high outliers in a model without shock patients, whereas 5 of 40 (12.5%) high outliers in the model without shock patients were not high outliers in the public report. A similar comparison of a report with no shock patients versus a report with all shock patients yields 8 of 40 (20%) and 5 of 37 (13.5%). Thus, the total numbers of outliers in the models were very similar, but there was some variation with regard to who the outliers were. Correlations in risk-adjusted rates between the public report model and the other models were all extremely high (0.99; p < 0.0001).
First, we used 3 different years to examine correspondence of risk-adjusted hospital mortality rates among the different models, and 3 different 3-year periods to do the same for physician mortality rates. It is possible that the correspondences would not be as close in future years, although it is remarkable how similar they were across the periods that were used.
Second, the provider risk-adjusted mortality rates and outliers in the public model may have been affected by risk avoidance due to the fear of public reporting. If so, it is possible that without this risk avoidance, the correspondence between the public reporting model and the other models would not have been as strong.
A few recent studies have demonstrated that the percentage of AMI patients and AMI patients with shock who undergo PCI is lower in New York and other public reporting states than in other states. This suggests that there is a fear among physicians in New York that their ratings in public reports are adversely affected by AMI patients with shock (in this case, nonrefractory shock because refractory shock patients are already excluded from public reporting). We simulated what hospital and physician risk-adjusted mortality rates and outlier status would have been if all shock patients were excluded from public reporting and compared these rates and outliers with their counterparts in the most recent public reports. Findings were that the risk-adjusted rates were very highly correlated, and the numbers of outliers were very similar, although between 10% and 15% of cardiologists who were outliers in 1 of the 2 exclusion rules were not outliers in the other one. This is important information that can serve as a basis for future discussions regarding patients who should be excluded from public reporting.
WHAT IS KNOWN? A few studies show that PCI rates for AMI and shock patients are lower in states with public reporting of PCI outcomes. This implies that cardiologists are fearful of the impact of high-risk cases on their reported risk-adjusted mortality rates and outlier status.
WHAT IS NEW? This study examined changes that would occur in hospital and cardiologist risk-adjusted mortality and outlier status if all shock patients were excluded from public reporting. Findings were that the risk-adjusted rates were very highly correlated, and the numbers of outliers were very similar, although between 10% and 15% of cardiologists who were outliers in one of the 2 exclusion rules were not outliers in the other one.
WHAT IS NEXT? This information can help form a basis for subsequent discussions regarding the exclusion of high-risk patients from public reporting.
The authors would like to thank New York State's Cardiac Advisory Committee (CAC) members for their encouragement and support of this study; and Cynthia Johnson and the cardiac catheterization laboratories of the participating hospitals for their tireless efforts to ensure the timeliness, completeness and accuracy of the data submitted.
Dr. Berger is a consultant for a National Quality Initiative not related to this study. Dr. Jacobs is a site principal investigator for Abbott Vascular and AstraZeneca. Dr. King is a member of the Data Safety Monitoring Board for Harvard Clinical Research Institute, Duke University, Capicor, Inc., Merck & Co, and Stentys. All other authors have reported that they have no relationships relevant to the contents of this paper to disclose.
- Abbreviations and Acronyms
- AMI = acute myocardial infarction
- CABG = coronary artery bypass graft
- CI = confidence interval
- NCDR = National Cardiovascular Data Registry
- OR = odds ratio
- PCI = percutaneous coronary intervention
- PCIRS = Percutaneous Coronary Interventions Reporting System
- American College of Cardiology Foundation
- New York State Department of Health
- Pennsylvania Health Care Cost Containment Council. Cardiac Surgery in Pennsylvania: Information About Hospitals and Cardiac Surgeons: Data July 1, 2011 to December 31, 2012. November 2013. Available at: http://www.phc4.org/reports/cabg/12/docs/cabg2012report.pdf. Accessed June 20, 2016.
- California Office of Statewide Health Planning and Development. Coronary Artery Bypass Graft (CABG) Surgery in California, 2013. Available at: http://www.oshpd.ca.gov/HID/Products/Clinical_Data/CABG/13Breakdown.html. Accessed June 20, 2016.
- Massachusetts Data Analysis Center. Adult Percutaneous Coronary Intervention in the Commonwealth of Massachusetts: Fiscal Year 2013 Report: Hospital Risk-Standardized In-Hospital Mortality Rates. October 2015. Available at: http://www.massdac.org/wp-content/uploads/PCI-FY2013.pdf. Accessed June 20, 2016.
- New Jersey Department of Health and Senior Services. Cardiac Surgery in New Jersey: 2011. Health Care Quality Assessment, Office of Policy and Strategic Planning. December 2015. Available at: http://nj.gov/health/healthcarequality/documents/cardconsumer15.pdf. Accessed June 20, 2016.
- Centers for Medicare & Medicaid Services. Measure Methodology. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/Measure-Methodology.html. Accessed June 16, 2016.
- Moscucci M., Eagle K.A., Share D., et al.
- Dranove D., Kessler D., McClellan M., Satterthwaite M.
- Resnic F.S., Welt F.G.
- Apolito R.A., Greenberg M.A., Menegus M.A., et al.
- Omoigui N.A., Miller D.P., Brown K.J., et al.
- Waldo S.W., McCabe J.M., O’Brien C., et al.
- Bangalore S., Guo Y., Xu J., et al.
- McCabe J.M., Waldo S.W., Kennedy K.F., Yeh R.W.
- New York State Department of Health. Percutaneous Coronary Interventions in New York State: 2010-2012. October 2015. Available at: http://www.health.ny.gov/statistics/diseases/cardiovascular/docs/pci_2010-2012.pdf. Accessed June 16, 2016.
- New York State Department of Health. Percutaneous Coronary Interventions (PCI) in New York State: 2009-2011. March 2014. Available at: http://www.health.ny.gov/statistics/diseases/cardiovascular/docs/pci_2009-2011.pdf. Accessed June 16, 2016.
- Hannan E.L., Sarrazin M.S.V., Doran D.R., Rosenthal G.E.
- Sherwood M.W., Brennan J.M., Ho K.K., et al.
- Hannan E.L.