Extending the Reach of Randomized Social Experiments: New Directions in Evaluations of American Welfare-to-Work and Employment Initiatives
- 1 February 2002
- journal article
- Published by Oxford University Press (OUP) in Journal of the Royal Statistical Society Series A: Statistics in Society
- Vol. 165 (1), 13-30
- https://doi.org/10.1111/1467-985x.0asp4
Abstract
Random assignment experiments are widely used in the USA to test the effectiveness of new social interventions. This paper discusses several major welfare-to-work experiments, highlighting their evolution from simple ‘black box’ tests of single interventions to multigroup designs used to compare alternative interventions or to isolate the effects of components of an intervention. The paper also discusses new efforts to combine experimental and non-experimental analyses to test underlying programme theories and to maximize the knowledge gained about the effectiveness of social programmes. Researchers and policy makers in other countries may find this variety of approaches useful to consider as they debate an expanded role for social experiments.
This publication has 14 references indexed in Scilit:
- Modelling Impact Heterogeneity. Journal of the Royal Statistical Society Series A: Statistics in Society, 2002
- Using Cluster Random Assignment to Measure Program Impacts. Evaluation Review, 1999
- Measuring Program Impacts on Earnings and Employment: Do Unemployment Insurance Wage Reports from Employers Agree with Surveys of Individuals? Journal of Labor Economics, 1999
- Does Mandatory Basic Education Improve Achievement Test Scores of AFDC Recipients? Evaluation Review, 1997
- Statistical Analysis and Optimal Design for Cluster Randomized Trials. Psychological Methods, 1997
- Enforcing a Participation Mandate in a Welfare-to-Work Program. Social Service Review, 1996
- Understanding Best Practices for Operating Welfare-to-Work Programs. Evaluation Review, 1996
- Accounting for No-Shows in Experimental Evaluation Designs. Evaluation Review, 1984
- Evaluating with Sense. Evaluation Review, 1983