Small Sample Properties of Probit Model Estimators

Abstract
When maximum likelihood estimates of the coefficients of a nonlinear model such as the probit model are obtained, several asymptotically equivalent covariance matrix estimators are available. These estimators are typically associated with different computational algorithms: with the Newton–Raphson algorithm, the inverse of the negative of the Hessian matrix of the log-likelihood function is used; with the method of scoring, the inverse of the information matrix; and with the procedure proposed by Berndt, Hall, Hall, and Hausman (1974), the inverse of the outer product of the first derivatives of the log-likelihood function. Although the three estimators are asymptotically equivalent, their finite-sample performance can differ. The main objective of this article is to use a Monte Carlo experiment to investigate the finite-sample properties of the three covariance matrix estimators in the context of maximum likelihood estimation of the probit model. Related questions concerning the empirical distributions of test statistics and the properties of a preliminary-test estimator are also examined, under varying degrees of multicollinearity. We find that, on average, the Hessian matrix and the information matrix give almost identical results and lead to more accurate estimates of the asymptotic covariance matrix than does the estimator based on first derivatives. The finite-sample mean squared error of the maximum likelihood estimator, however, is considerably greater than the asymptotic covariance matrix, and the estimator based on first derivatives provides a better estimate of finite-sample mean squared error. All three estimators lead to empirical distributions that can be approximated by an asymptotic normal distribution. The pretest estimator formed by testing for the omission of an explanatory variable is reasonably efficient, but its distribution is seldom well approximated by a normal distribution.
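As a concrete illustration of the three covariance matrix estimators compared in the article, the following sketch fits a probit model by maximum likelihood on simulated data and then computes all three. This is not the article's Monte Carlo design; the sample size, true coefficients, and regressor distribution here are illustrative assumptions, and the analytic Hessian uses the standard probit form (Greene's generalized-residual expression).

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Simulated probit data (illustrative design, not the article's experiment)
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 1.0])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

def negloglik(b):
    # Probit log-likelihood: sum of y*log(Phi) + (1-y)*log(1-Phi)
    p = np.clip(norm.cdf(X @ b), 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

b = minimize(negloglik, np.zeros(2), method="BFGS").x

xb = X @ b
phi = norm.pdf(xb)
Phi = np.clip(norm.cdf(xb), 1e-12, 1 - 1e-12)
lam = phi * (y - Phi) / (Phi * (1 - Phi))  # per-observation score weight

# 1. Newton-Raphson: inverse of the negative analytic Hessian,
#    H = -sum_i lam_i * (lam_i + x_i'b) * x_i x_i'
negH = (X * (lam * (lam + xb))[:, None]).T @ X
cov_H = np.linalg.inv(negH)

# 2. Method of scoring: inverse of the information matrix,
#    I = sum_i phi_i^2 / (Phi_i (1 - Phi_i)) * x_i x_i'
w = phi**2 / (Phi * (1 - Phi))
cov_I = np.linalg.inv((X * w[:, None]).T @ X)

# 3. BHHH (1974): inverse of the outer product of per-observation scores
S = X * lam[:, None]
cov_B = np.linalg.inv(S.T @ S)
```

With a well-behaved design like this one, the three matrices are close; the article's point is that in small samples, and under multicollinearity, their accuracy as estimates of the asymptotic covariance matrix and of finite-sample mean squared error can differ substantially.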