The selection and training of examiners for clinical examinations

Abstract
Inconsistent marking in clinical examinations is a well-documented problem. This project identified some of the factors responsible for this inconsistency. A standardized rating situation was devised: five students were videotaped performing part of a physical examination on simulated patients, and eighteen experienced medical and surgical examiners rated their performances using an objective, checklist-style rating form. No differences were evident between physicians and surgeons. The examiners were divided into three subgroups, one receiving no training, one limited training and one more extensive training. Examiners re-rated the same students 2 months after the first rating. Inter-rater reliability was satisfactory for the first ratings, and training produced no significant improvement. A substantial improvement was achieved instead by identifying the most inconsistent raters and removing them from the analysis. Training thus proved unnecessary for consistent examiners and ineffective for less consistent ones. On the basis of these results, only consistent examiners were selected to take part in the interactive component of the objective structured final-year examinations, and the ratings in these examinations achieved high levels of inter-rater reliability. It was concluded that the combination of an objective checklist rating form, a controlled test situation and the selection of inherently consistent examiners could solve the problem of inconsistent marking in clinical examinations.
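The abstract does not specify how reliability was computed or how inconsistent raters were identified. As a rough illustration only, the sketch below uses synthetic data matching the study's dimensions (18 examiners, 5 students) and two common choices that are assumptions here, not the authors' stated method: Cronbach's alpha with raters treated as items for inter-rater reliability, and each rater's correlation with the mean of the remaining raters as a consistency index, with a hypothetical 0.7 cut-off.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 18 examiners score 5 students (0-100 checklist totals).
# Scores are true student ability plus per-examiner noise; three examiners
# are given much larger noise, mimicking "inconsistent" raters.
n_raters, n_students = 18, 5
ability = rng.uniform(50, 90, n_students)
noise_sd = np.full(n_raters, 3.0)
noise_sd[:3] = 15.0  # the inconsistent raters (assumption for illustration)
ratings = ability + rng.normal(0, noise_sd[:, None], (n_raters, n_students))

def cronbach_alpha(x):
    """Inter-rater reliability, treating raters as 'items' (rows),
    students as cases (columns)."""
    k = x.shape[0]
    item_vars = x.var(axis=1, ddof=1).sum()
    total_var = x.sum(axis=0).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def consistency(x):
    """Each rater's correlation with the mean of the remaining raters."""
    out = []
    for i in range(x.shape[0]):
        others = np.delete(x, i, axis=0).mean(axis=0)
        out.append(np.corrcoef(x[i], others)[0, 1])
    return np.array(out)

r = consistency(ratings)
keep = r >= 0.7  # assumed cut-off separating consistent from inconsistent raters
print(f"alpha, all raters:        {cronbach_alpha(ratings):.3f}")
print(f"alpha, consistent subset: {cronbach_alpha(ratings[keep]):.3f}")
```

Run on such data, the alpha for the consistent subset exceeds that of the full panel, which parallels the paper's finding that removing the most inconsistent raters, rather than training them, improved inter-rater reliability.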