The Risks of Risk Adjustment
- 19 November 1997
- Research article
- Published by American Medical Association (AMA)
- JAMA, Vol. 278 (19), 1600-1607
- https://doi.org/10.1001/jama.1997.03550190064046
Abstract
Context.—Risk adjustment is essential before comparing patient outcomes across hospitals. Hospital report cards around the country use different risk adjustment methods.

Objectives.—To examine the history and current practices of risk adjusting hospital death rates and consider the implications for using risk-adjusted mortality comparisons to assess quality.

Data Sources and Study Selection.—This article examines severity measures used in states and regions to produce comparisons of risk-adjusted hospital death rates. Detailed results are presented from a study comparing current commercial severity measures using a single database. The study included adults admitted for acute myocardial infarction (n=11 880), coronary artery bypass graft surgery (n=7765), pneumonia (n=18 016), and stroke (n=9407). Logistic regressions within each condition predicted in-hospital death using severity scores. Odds ratios for in-hospital death were compared across pairs of severity measures. For each hospital, z scores compared actual and expected death rates.

Results.—The severity measure called Disease Staging had the highest c statistic (which measures how well a severity measure discriminates between patients who lived and those who died) for acute myocardial infarction, 0.86; the measure called All Patient Refined Diagnosis Related Groups had the highest for coronary artery bypass graft surgery, 0.83; and the measure called MedisGroups had the highest for pneumonia, 0.85, and stroke, 0.87. Different severity measures predicted different probabilities of death for many patients. Severity measures frequently disagreed about which hospitals had particularly low or high z scores. Agreement in identifying low- and high-mortality hospitals between severity-adjusted and unadjusted death rates was often better than agreement between severity measures.

Conclusions.—Severity does not explain differences in death rates across hospitals. Different severity measures frequently produce different impressions about relative hospital performance. Severity-adjusted mortality rates alone are unlikely to isolate quality differences across hospitals.
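The two statistics the abstract relies on can be illustrated with a small sketch: the c statistic is the probability that a randomly chosen patient who died was assigned a higher predicted risk than a randomly chosen survivor, and a hospital's z score compares its observed death count with the count expected from its patients' severity-predicted risks. The hospitals, probabilities, and counts below are toy values for illustration only, not data from the study:

```python
import math

# Toy data (illustrative only): each patient has a hospital label, a
# severity-based predicted probability of in-hospital death, and an
# observed outcome (1 = died, 0 = survived).
patients = [
    ("A", 0.05, 0), ("A", 0.40, 1), ("A", 0.10, 0), ("A", 0.30, 0),
    ("B", 0.20, 0), ("B", 0.60, 1), ("B", 0.15, 1), ("B", 0.08, 0),
]

def c_statistic(data):
    """Probability that a randomly chosen death received a higher
    predicted risk than a randomly chosen survivor (ties count 0.5)."""
    deaths = [p for _, p, d in data if d == 1]
    survivors = [p for _, p, d in data if d == 0]
    concordant = 0.0
    for risk_death in deaths:
        for risk_surv in survivors:
            if risk_death > risk_surv:
                concordant += 1.0
            elif risk_death == risk_surv:
                concordant += 0.5
    return concordant / (len(deaths) * len(survivors))

def hospital_z(data, hospital):
    """z score comparing a hospital's actual deaths with the number
    expected under its patients' predicted risks; the variance of the
    expected count is the sum of p*(1-p) over its patients."""
    probs = [p for h, p, _ in data if h == hospital]
    observed = sum(d for h, _, d in data if h == hospital)
    expected = sum(probs)
    variance = sum(p * (1 - p) for p in probs)
    return (observed - expected) / math.sqrt(variance)

print(round(c_statistic(patients), 3))          # discrimination overall
for h in ("A", "B"):
    print(h, round(hospital_z(patients, h), 2))  # per-hospital outlier score
```

A c statistic near 0.85, as reported for the best-performing measures, means the severity score ranks a death above a survivor about 85% of the time; a large positive z score flags a hospital with more deaths than its case mix predicts.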