Evaluation designs for adequacy, plausibility and probability of public health programme performance and impact

Abstract
The question of why to evaluate a programme is seldom discussed in the literature. The present paper argues that the answer to this question is essential for choosing an appropriate evaluation design. The discussion is centred on summative evaluations of large-scale programme effectiveness, drawing upon examples from the fields of health and nutrition, but the findings may be applicable to other subject areas. The main objective of an evaluation is to influence decisions. How complex and precise the evaluation must be depends on who the decision maker is and on what types of decisions will be taken as a consequence of the findings. Different decision makers not only demand different types of information but also vary in how informative and precise they require the findings to be. Both complex and simple evaluations, however, should be equally rigorous in relating the design to the decisions. Based on the types of decisions that may be taken, a framework is proposed for deciding upon appropriate evaluation designs. Its first axis concerns the indicators of interest: whether these refer to the provision or utilization of services, to coverage, or to impact measures. The second axis refers to the type of inference to be made: whether this is a statement of adequacy, plausibility or probability. Beyond this framework, other factors affect the choice of an evaluation design, including the efficacy of the intervention, the field of knowledge, timing and costs. Regarding costs, decision makers should be made aware that evaluation costs increase rapidly with complexity, so that a compromise must often be reached. Examples are given of how to use the two classification axes, as well as these additional factors, to help decision makers and evaluators translate the need for evaluation (the why) into the appropriate design (the how).