The Effect of Candidatesʼ Perceptions of the Evaluation Method on Reliability of Checklist and Global Rating Scores in an Objective Structured Clinical Examination
- 1 July 2002
- journal article
- clinical trial
- Published by Wolters Kluwer Health in Academic Medicine
- Vol. 77 (7), 725-728
- https://doi.org/10.1097/00001888-200207000-00018
Abstract
Process-oriented global ratings, which assess "overall performance" on one or a number of domains, have been purported to capture nuances of expert performance better than checklists. Pilot data indicate that students change behaviors depending on their perceptions of how they are being scored, while experts do not. This study examines the impact of the students' orientation to the rating system on OSCE scores and the interstation reliability of the checklist and global scores. A total of 57 third- and fourth-year medical students at one school were randomly assigned to two groups and performed a ten-station OSCE. Group 1 was told that scores were based on checklists. Group 2 was informed that performance would be rated using global ratings geared toward assessing overall competence. All candidates were scored by physician-examiners who were unaware of the students' orientations to the rating system and who used both checklists and global rating forms. A mixed two-factor ANOVA identified a significant interaction of rating form by group (F(1,55) = 5.5, p < .05), with Group 1 (checklist-oriented) having higher checklist scores but lower global scores than did Group 2 (oriented to global ratings). In addition, Group 1 had higher interstation alpha coefficients than did Group 2 for both global scores (0.74 versus 0.63) and checklist scores (0.63 versus 0.40). The interaction effect on total exam scores suggests that students adapt their behaviors to the system of evaluation. However, the lower reliability coefficients for both forms found in the process-oriented global-rating group suggest that an individual's capacity to adapt to the system of global rating forms is relatively station-specific, possibly depending on his or her expertise in the domain represented in each station.
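The interstation alpha coefficients reported above (e.g., 0.74 versus 0.63 for global scores) are Cronbach's alpha computed over stations, treating each station as an "item". A minimal sketch of that computation, using entirely hypothetical candidate-by-station scores (the study itself used 57 candidates and ten stations), might look like:

```python
# Illustrative computation of an interstation reliability coefficient
# (Cronbach's alpha), with each OSCE station treated as an item.
# The score matrix below is hypothetical, not data from the study.

def cronbach_alpha(scores):
    """scores: list of candidates, each a list of per-station scores."""
    k = len(scores[0])  # number of stations (items)

    def var(xs):
        # population variance across candidates
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # variance of each station's scores across candidates
    station_vars = [var([c[j] for c in scores]) for j in range(k)]
    # variance of each candidate's total score
    total_var = var([sum(c) for c in scores])
    return (k / (k - 1)) * (1 - sum(station_vars) / total_var)

# Hypothetical proportion-correct scores: 4 candidates x 3 stations
data = [
    [0.80, 0.75, 0.78],
    [0.60, 0.58, 0.65],
    [0.90, 0.85, 0.88],
    [0.55, 0.60, 0.52],
]
print(round(cronbach_alpha(data), 2))
```

Because every station here rank-orders the candidates almost identically, this toy matrix yields a very high alpha; the study's lower coefficients in the globally oriented group (0.40 for checklists) indicate much weaker cross-station consistency.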
This publication has 7 references indexed in Scilit:
- Assessing the generalizability of OSCE measures across content domains. Academic Medicine, 1999
- OSCE checklists do not capture increasing levels of expertise. Academic Medicine, 1999
- Comparing the psychometric properties of checklists and global rating scales for assessing performance on an OSCE-format examination. Academic Medicine, 1998
- Pitfalls in the pursuit of objectivity: issues of validity, efficiency and acceptability. Medical Education, 1991
- Pitfalls in the pursuit of objectivity: issues of reliability. Medical Education, 1991
- Assessment of clinical skills with standardized patients: State of the art. Teaching and Learning in Medicine, 1990
- Knowledge and clinical problem-solving. Medical Education, 1985