Stability of clinical reasoning assessment results with the Script Concordance test across two different linguistic, cultural and learning environments

Abstract
The Script Concordance (SC) test is designed to measure the organization of knowledge that allows the interpretation of data in clinical reasoning. A distinctive feature of the test is that its answer keys use an aggregate scoring method based on the answers given by a panel of experts. Previous studies have shown that the SC test has good construct validity. This study, conducted in urology, explores (1) the stability of the construct validity of the test across two different linguistic and learning environments and (2) the effect of using expert panels drawn from different environments. An 80-item SC test was administered to participants from a French and a Canadian university. Two levels of experience were tested: 25 residents in urology (11 from the French university and 14 from the Canadian university) and 23 students (15 from the French faculty, 8 from the Canadian faculty). Reliability was assessed with Cronbach's alpha coefficient, and scores were compared between groups by analysis of variance. The reliability coefficient of the 80-item test was 0.794 for the French participants and 0.795 for the Canadian participants. Scores increased with clinical experience in urology at both sites, and candidates obtained higher scores when their answers were scored with the key derived from the expert panel of their own country. These data support the stability of the construct validity of the tool across different learning environments.
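
For readers unfamiliar with the two quantitative steps summarized above, the following minimal sketch (in Python with NumPy; the function names and the random demo data are hypothetical, since the paper does not describe its software) illustrates them: building an answer key in which each response option earns credit proportional to the number of panel experts who chose it, with the modal answer earning full credit, and computing Cronbach's alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), over the per-item scores. This follows the aggregate scoring rule commonly described for SC tests and is not necessarily the authors' exact procedure.

```python
import numpy as np

def aggregate_scoring_key(panel_answers, n_options=5):
    """Build an SC-test answer key by aggregate scoring.

    panel_answers: (n_experts, n_items) array of chosen option indices.
    Returns a (n_items, n_options) credit matrix: the modal option earns
    1, other options earn a fraction proportional to how many experts
    chose them.
    """
    n_experts, n_items = panel_answers.shape
    key = np.zeros((n_items, n_options))
    for item in range(n_items):
        counts = np.bincount(panel_answers[:, item], minlength=n_options)
        key[item] = counts / counts.max()  # modal answer -> full credit
    return key

def cronbach_alpha(item_scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item vars) / var(totals))."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical demo: 10 experts and 25 candidates on an 80-item test.
rng = np.random.default_rng(0)
panel = rng.integers(0, 5, size=(10, 80))
candidates = rng.integers(0, 5, size=(25, 80))
key = aggregate_scoring_key(panel)
per_item = key[np.arange(80), candidates]  # credit each candidate earns per item
print("total scores:", np.round(per_item.sum(axis=1)[:5], 2))
print("Cronbach's alpha:", round(cronbach_alpha(per_item), 3))
```

Because the demo responses are random, alpha comes out near zero here; real data, in which more experienced candidates answer more concordantly with the panel, is what produces reliability coefficients of the magnitude reported in the study (about 0.79).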