The Evaluation of Broad-Aim Programs: A Cautionary Case and a Moral

Abstract
It is often assumed that the ideal study-design for evaluating the effectiveness of a social program is a controlled experiment. A case study is presented of an evaluation-research project that utilized such a design. The project encountered both technical difficulties and intraorganizational friction which, it is argued, are virtually inherent in the use of an experimental design to appraise the effects of a broad-aim, largely unstandardized, and inadequately replicated action-program. The technical difficulties include problems in identifying criteria, in excluding extraneous variables, in accommodating the changing form of the intervention, and in the limitations of the experimental form as a source of new knowledge. The sources of intraorganizational friction include the constraints on stimulus-modification imposed by the experimental commitment, the tendency of operationalizations of aims to become aims in their own right, and the comparative ignorance of what the action-program is doing on the part of a research group concentrating on the collection of baseline data. A plea is made for evaluation that is more qualitative and process-oriented.