The Clinical Algorithm Nosology

Abstract
Concern regarding the cost and quality of medical care has led to a proliferation of competing clinical practice guidelines. No technique has been described for determining objectively the degree of similarity between alternative guidelines for the same clinical problem. The authors describe the development of the Clinical Algorithm Nosology (CAN), a new method to compare one form of guideline: the clinical algorithm. The CAN measures overall design complexity independent of algorithm content, qualitatively describes the clinical differences between two alternative algorithms, and then scores the degree of similarity between them. CAN algorithm design-complexity scores correlated highly with clinicians' estimates of complexity on an ordinal scale (r = 0.86). Five pairs of clinical algorithms addressing three topics (gallstone lithotripsy, thyroid nodule, and sinusitis) were selected for interrater reliability testing of the CAN clinical-similarity scoring system. Raters categorized the similarity of algorithm pathways in alternative algorithms as "identical," "similar," or "different." Interrater agreement was achieved on 85/109 scores (78%); weighted kappa statistic, κ = 0.73. It is concluded that the CAN is a valid method for determining the structural complexity of clinical algorithms, and a reliable method for describing differences and scoring the similarity between algorithms for the same clinical problem. In the future, the CAN may serve to evaluate the reliability of algorithm development programs, and to support providers and purchasers in choosing among alternative clinical guidelines.

Key words: guidelines; clinical algorithms; reliability; validity; quality assurance. (Med Decis Making 1992;12:123-131)
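
The abstract does not spell out how the weighted kappa was computed. As a minimal sketch, the Python snippet below implements Cohen's weighted kappa with linear weights on a three-level ordinal scale matching the CAN similarity categories ("identical" < "similar" < "different"); the example ratings are hypothetical, not the study's data.

```python
import numpy as np

def weighted_kappa(ratings_a, ratings_b, categories):
    """Cohen's linearly weighted kappa for two raters on an ordinal scale."""
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}

    # Observed agreement matrix, normalized to proportions.
    obs = np.zeros((k, k))
    for a, b in zip(ratings_a, ratings_b):
        obs[index[a], index[b]] += 1
    obs /= obs.sum()

    # Expected matrix under chance, from each rater's marginal distribution.
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))

    # Linear disagreement weights: 0 on the diagonal, growing with
    # ordinal distance between the two raters' categories.
    i, j = np.indices((k, k))
    w = np.abs(i - j) / (k - 1)

    return 1 - (w * obs).sum() / (w * exp).sum()

# Hypothetical pathway-similarity ratings on the CAN three-level scale.
scale = ["identical", "similar", "different"]
rater1 = ["identical", "similar", "similar", "different", "identical"]
rater2 = ["identical", "similar", "different", "different", "identical"]
print(weighted_kappa(rater1, rater2, scale))
```

Linear weights penalize an "identical" vs. "different" disagreement twice as heavily as an adjacent-category disagreement, which is why weighted kappa suits ordinal scales like this one better than simple percent agreement.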
