In response to increasing concerns about the prevalence of knowledge-based assessments of medical student competency, leaders in medical education have emphasized the importance of methods that quantify student performance. As a result, the use of objective structured clinical examinations (OSCEs) is viewed by many as the newest and most promising technique for assessing students' abilities. In considering the implementation of a fourth-year OSCE, faculty at the College of Human Medicine at Michigan State University became uncomfortable with some of the technical limitations of the method (limited generalizability, weak linkages to the curriculum, little opportunity for improvement in examinees' skills, and others), as well as the possible ramifications of such an innovation within their school's specific curricular and organizational contexts. This essay is offered as a reflection on the challenges and possible alternatives that have emerged as the faculty have considered how best to design and implement performance-based assessment within their institution. Rather than using the OSCE as a milestone marker of student performance, they consider the possibility of smaller assessment events, closely tied to the curriculum and consistent with the guiding principles of the medical school.