Abstract
Summarizing theory and results from empirical research, this article illustrates why effects measured with retrospective pretests may be subject to bias and may not always be explained by response shift theory. It presents three contending theories to explain the difference between retrospective and traditional pretest results and considers how the evaluation environment may shape subject bias. Four recommendations are made for workforce education (WE) researchers and practitioners who employ retrospective pretest data to report programme outcomes. WE professionals should (a) consider the cognitive implications of asking participants to recall information, (b) select a robust evaluation design to encompass the retrospective pretest, (c) provide validity evidence for retrospective pretest data, and (d) conduct additional research to evaluate how elements of the evaluation process moderate retrospective assessments.