Geometric Ergodicity of van Dyk and Meng's Algorithm for the Multivariate Student's t Model

Abstract
Let π denote the posterior distribution that results when a random sample of size n from a d-dimensional location-scale Student's t distribution (with ν degrees of freedom) is combined with the standard noninformative prior. van Dyk and Meng developed an efficient Markov chain Monte Carlo (MCMC) algorithm for sampling from π and provided considerable empirical evidence to suggest that their algorithm converges to stationarity much faster than the standard data augmentation algorithm. In addition to its practical importance, this algorithm is interesting from a theoretical standpoint because it is based upon a Markov chain that is not positive recurrent. In this article, we formally analyze the relevant sub-Markov chain underlying van Dyk and Meng's algorithm. In particular, we establish drift and minorization conditions that show that, for many (d, ν, n) triples, the sub-Markov chain is geometrically ergodic. This is the first general, rigorous analysis of an MCMC algorithm based upon a nonpositive recurrent Markov chain. Moreover, our results are important from a practical standpoint because (1) geometric ergodicity guarantees the existence of central limit theorems that can be used to construct Monte Carlo standard errors and (2) the drift and minorization conditions themselves allow for the calculation of exact upper bounds on the total variation distance to stationarity. The results are illustrated using a simple numerical example.
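For concreteness, the target posterior can be written down explicitly. The following display is a sketch under the assumption that the standard noninformative prior referred to above is the improper prior proportional to |Σ|^{-(d+1)/2}, where μ ∈ R^d and the d × d positive definite matrix Σ denote the location and scale parameters, y_1, …, y_n denote the observed data, and the degrees of freedom ν is taken as fixed and known:
\[
\pi(\mu, \Sigma \mid y_1, \dots, y_n) \;\propto\; |\Sigma|^{-\frac{d+1}{2}} \prod_{i=1}^{n} |\Sigma|^{-\frac{1}{2}} \left[ 1 + \frac{(y_i - \mu)^{\top} \Sigma^{-1} (y_i - \mu)}{\nu} \right]^{-\frac{\nu + d}{2}} .
\]
Under this reading, π is the invariant distribution targeted by both the standard data augmentation algorithm and van Dyk and Meng's algorithm, and the geometric ergodicity results described above concern convergence to this π.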