Assessing the reliability of standardized performance indicators

Abstract
Objectives. To investigate the reliability of the self-reported standardized performance indicators introduced by the Joint Commission on Accreditation of Healthcare Organizations in July 2002 and implemented in approximately 3400 accredited US hospitals. The study sought to identify the most common data quality problems and to determine their causes and possible strategies for resolution.

Design. Data were independently reabstracted from a random sample of 30 hospitals. The reabstracted data were compared with the data originally abstracted, and discrepancies were adjudicated with hospital staff. Structured interviews were used to probe possible reasons for abstraction discrepancies.

Results. The mean data element agreement rate across the 61 data elements evaluated was 91.9%, and the mean kappa statistic for binary data elements was 0.68. Agreement rates for individual data elements ranged from 62.4% to 100%. The mean absolute difference between indicator rates calculated from the original and reabstracted data was 4.88%, with differences ranging from 0.0% to 13.3%. Analysis of the symmetry of disagreement between original abstractors and reabstractors identified eight indicators whose differences in calculated rates were unlikely to have occurred by chance (P < 0.05).

Conclusion. Although improvement in the accuracy and completeness of the self-reported data is possible and desirable, the baseline level of data reliability appears acceptable for indicators used to assess and improve hospital performance on selected clinical topics.
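To make the reported agreement measures concrete, the sketch below shows one conventional way to compute them for a single binary data element: Cohen's kappa from a 2x2 abstractor-versus-reabstractor table, and an exact McNemar test of symmetry on the two discordant cells. The counts are hypothetical, and the abstract does not specify the study's exact statistical procedure, so this is only an illustrative assumption of the standard formulas.

from math import comb

def cohen_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table between two abstractors.

    a = both record 'yes', d = both record 'no',
    b and c = the two kinds of disagreement (discordant cells).
    """
    n = a + b + c + d
    p_obs = (a + d) / n                                      # observed agreement
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # agreement expected by chance
    return (p_obs - p_exp) / (1 - p_exp)

def mcnemar_exact(b, c):
    """Exact two-sided McNemar test of symmetry on the discordant cells.

    Under the null hypothesis that disagreements are symmetric (random),
    b follows a Binomial(b + c, 0.5) distribution.
    """
    n_disc = b + c
    k = min(b, c)
    p = 2 * sum(comb(n_disc, i) * 0.5**n_disc for i in range(k + 1))
    return min(p, 1.0)

# Hypothetical counts for one binary data element (not taken from the study):
a, b, c, d = 80, 3, 12, 25
print(f"agreement = {(a + d) / (a + b + c + d):.3f}")   # 0.875
print(f"kappa     = {cohen_kappa(a, b, c, d):.2f}")     # 0.69
print(f"McNemar P = {mcnemar_exact(b, c):.3f}")         # 0.035

In this invented example the overall agreement looks high, but the disagreements fall mostly in one direction (c = 12 versus b = 3), so the symmetry test flags the element at P < 0.05, which is the kind of pattern the study used to identify indicators whose original and reabstracted rates differed beyond chance.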