Power and Measures of Effect Size in Analysis of Variance With Fixed Versus Random Nested Factors.

Abstract
Ignoring a nested factor can affect the validity of statistical decisions about treatment effectiveness. Previous discussions have centered on the consequences, for Type I error rates and measures of effect size, of ignoring nested factors versus treating them as random factors (B. E. Wampold & R. C. Serlin, 2000). The authors (a) discuss circumstances under which treating nested provider effects as fixed rather than random is appropriate; (b) present 2 formulas for the correct estimation of effect sizes when nested factors are fixed; (c) present the results of Monte Carlo simulations of the consequences of treating providers as fixed versus random for effect size estimates, Type I error rates, and power; and (d) discuss the implications of misspecifying provider effects for the study of differential treatment effects in psychotherapy research.
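
The kind of Monte Carlo comparison summarized in (c) can be sketched as follows. The snippet below is an illustrative simulation, not the authors' code; the design values (2 treatments, 5 providers nested within each treatment, 10 patients per provider, a nonzero provider variance component, and no true treatment effect) are assumptions for demonstration. It contrasts Type I error rates when the treatment effect is tested against within-provider error (providers treated as fixed) versus against the providers-within-treatments mean square (providers treated as random).

```python
# Minimal Monte Carlo sketch (not the authors' code): Type I error rates when
# providers nested within treatments are analyzed as fixed vs. random.
# All design values below are illustrative assumptions.
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(0)
t, p, n = 2, 5, 10           # treatments, providers per treatment, patients per provider
sigma2_provider = 0.2        # provider variance component; no true treatment effect
alpha, n_sims = 0.05, 5000

reject_fixed = reject_random = 0
for _ in range(n_sims):
    # Generate data under the null: random provider effects, unit error variance.
    provider_effects = rng.normal(0.0, np.sqrt(sigma2_provider), size=(t, p))
    y = provider_effects[:, :, None] + rng.normal(0.0, 1.0, size=(t, p, n))

    grand = y.mean()
    treat_means = y.mean(axis=(1, 2))
    prov_means = y.mean(axis=2)

    # Balanced nested ANOVA sums of squares.
    ss_treat = p * n * np.sum((treat_means - grand) ** 2)
    ss_prov = n * np.sum((prov_means - treat_means[:, None]) ** 2)
    ss_within = np.sum((y - prov_means[:, :, None]) ** 2)

    ms_treat = ss_treat / (t - 1)
    ms_prov = ss_prov / (t * (p - 1))
    ms_within = ss_within / (t * p * (n - 1))

    # Providers treated as fixed: treatment tested against within-provider error.
    p_fixed = f_dist.sf(ms_treat / ms_within, t - 1, t * p * (n - 1))
    # Providers treated as random: treatment tested against providers within treatments.
    p_random = f_dist.sf(ms_treat / ms_prov, t - 1, t * (p - 1))

    reject_fixed += p_fixed < alpha
    reject_random += p_random < alpha

print(f"Type I error rate, providers fixed:  {reject_fixed / n_sims:.3f}")
print(f"Type I error rate, providers random: {reject_random / n_sims:.3f}")
```

Under these assumed conditions, the fixed-provider analysis should show an inflated rejection rate whenever the provider variance component is nonzero, whereas the random-provider analysis should hold the nominal .05 level, which is the contrast the simulations in the article examine.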