Abstract
Behavioral geneticists commonly parameterize a genetic or environmental covariance matrix as the product of a lower triangular matrix postmultiplied by its transpose, a technique commonly referred to as "fitting a Cholesky." Here, simulations demonstrate that this procedure is sometimes valid, but at other times: (1) it may not produce fit statistics that are distributed as χ²; or (2) if the fit statistic is distributed as χ², the degrees of freedom (df) are not always the difference between the number of parameters in the general model and the number of parameters in a constrained model. It is hypothesized that the problem arises because the Cholesky parameterization requires that the covariance matrix formed by the product be positive semidefinite, that is, either positive definite or singular. Even though a population covariance matrix may be positive definite, the combination of sampling error and the derived, rather than directly observed, nature of genetic and environmental matrices allows estimated matrices that are not positive semidefinite, that is, matrices with one or more negative eigenvalues. When this occurs, fitting a Cholesky restricts the numerical search space and compromises the maximum likelihood theory currently used in behavioral genetics. Until the reasons for this phenomenon are understood and satisfactory solutions are developed, those who fit Cholesky matrices face the burden of demonstrating the validity of their fit statistics and of the df used for model comparisons. An interim remedy is proposed: fit both an unconstrained model and a Cholesky model, and if the two differ, report the differences in fit statistics and parameter estimates. Cholesky problems are a matter of degree, not of kind. Thus, some Cholesky solutions will differ only trivially from the unconstrained solutions, and the importance of the problems must be assessed by how often the two lead to different substantive interpretations of the results. If followed, the proposed interim remedy will develop a body of empirical data with which to assess the extent to which Cholesky problems are important substantive issues rather than statistical curiosities.
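To make the central algebraic point concrete (the notation here is generic and is not taken from the article's simulations): if a covariance matrix is parameterized as the product of a lower triangular matrix and its transpose, then for any vector x,

\[
\Sigma = L L^{\top}, \qquad
x^{\top} \Sigma x \;=\; x^{\top} L L^{\top} x \;=\; \lVert L^{\top} x \rVert^{2} \;\ge\; 0 ,
\]

so Σ is positive semidefinite by construction, that is, positive definite or singular. A derived matrix with a negative eigenvalue therefore lies outside the Cholesky parameter space, which is the boundary situation described above.

The following is a minimal, hypothetical sketch, not the article's simulation code, of how sampling error in a derived (rather than directly observed) matrix can produce negative eigenvalues. It simulates MZ and DZ twin pairs from a population whose true genetic covariance matrix is positive definite, forms a Falconer-style derived estimate of the genetic covariance matrix from cross-twin covariance blocks, and inspects its eigenvalues; all parameter values and sample sizes are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Assumed population parameters for two phenotypes (illustrative values only).
A = np.array([[0.5, 0.3], [0.3, 0.5]])   # additive genetic covariance, positive definite
E = np.array([[0.5, 0.1], [0.1, 0.5]])   # unique environmental covariance

def simulate_pairs(n, r_genetic):
    """Simulate n twin pairs; r_genetic is 1.0 for MZ pairs and 0.5 for DZ pairs."""
    cov = np.block([[A + E, r_genetic * A],
                    [r_genetic * A, A + E]])
    return rng.multivariate_normal(np.zeros(4), cov, size=n)

n = 100  # small sample so sampling error is visible
mz = simulate_pairs(n, 1.0)
dz = simulate_pairs(n, 0.5)

# Cross-twin covariance blocks: twin 1's traits (columns 0-1) vs. twin 2's traits (columns 2-3).
cov_mz = np.cov(mz, rowvar=False)[:2, 2:]
cov_dz = np.cov(dz, rowvar=False)[:2, 2:]

# Falconer-style derived genetic covariance matrix: 2 * (MZ - DZ), symmetrized.
# Nothing in this construction forces the estimate to be positive semidefinite.
A_hat = 2.0 * (cov_mz - cov_dz)
A_hat = (A_hat + A_hat.T) / 2.0

print("Eigenvalues of the derived genetic covariance matrix:", np.linalg.eigvalsh(A_hat))
# In small samples one eigenvalue can be negative even though the population
# matrix A is positive definite; a Cholesky product LL' can never reproduce that.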