The Solution and Estimation of Discrete Choice Dynamic Programming Models by Simulation and Interpolation: Monte Carlo Evidence

Abstract
Over the past decade, a substantial literature has developed on methods for the estimation of discrete choice dynamic programming (DDP) models of behavior. However, implementing these methods can impose major computational burdens because solving for agents' decision rules often involves high-dimensional integrations that must be performed at each point in the state space. In this paper we develop an approximate solution method that consists of: 1) using Monte Carlo integration to simulate the required multiple integrals at a subset of the state points, and 2) interpolating the non-simulated values using a regression function. The overall performance of this approximation method appears to be excellent.
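
The two-step idea described in the abstract can be illustrated with a minimal sketch. The model, payoff functions, shock distribution, and regressors below are hypothetical placeholders, not the paper's specification; the sketch only shows the structure of simulating expected maxima (Emax) at a subset of state points and interpolating the rest by regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-alternative setup: each state s has deterministic payoffs
# perturbed by i.i.d. normal shocks, and Emax(s) = E[max_a payoff(s, a) + eps_a].
def payoffs(states):
    # states: (n, 2) array of state variables; returns (n, 2) alternative payoffs
    return np.column_stack([1.0 + 0.5 * states[:, 0],
                            0.8 + 0.7 * states[:, 1]])

n_states, n_draws, n_simulated = 1000, 200, 100
states = rng.uniform(0.0, 2.0, size=(n_states, 2))

# Step 1: Monte Carlo integration of Emax at a random subset of state points.
subset = rng.choice(n_states, size=n_simulated, replace=False)
shocks = rng.normal(scale=0.5, size=(n_draws, 2))
v = payoffs(states[subset])                              # (n_simulated, 2)
emax_sim = np.maximum(v[:, None, 0] + shocks[:, 0],
                      v[:, None, 1] + shocks[:, 1]).mean(axis=1)

# Step 2: interpolate Emax at the non-simulated states by regressing the
# simulated values on simple functions of the state (here, the deterministic
# payoffs and their "static" maximum -- an illustrative choice of regressors).
def features(v):
    return np.column_stack([np.ones(len(v)), v, v.max(axis=1)])

coef, *_ = np.linalg.lstsq(features(v), emax_sim, rcond=None)
emax_all = features(payoffs(states)) @ coef              # interpolated Emax everywhere
emax_all[subset] = emax_sim                              # keep simulated values where available
```

In a full solution the same simulate-and-interpolate step would be repeated by backward induction at each decision period, with the interpolated Emax values entering the continuation values of the previous period.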