Abstract
A continuous-time Markov decision process with uniformly bounded transition rates is shown to be equivalent to a simpler discrete-time Markov decision process under both the discounted and average reward criteria on an infinite horizon. This result clarifies some earlier work in this area.
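A standard construction realizing an equivalence of this kind is uniformization. The sketch below is illustrative and uses our own notation ($q$, $\Lambda$, $\alpha$); it is not necessarily the paper's exact transformation.

```latex
% Sketch of the uniformization construction (illustrative assumptions).
% Let the CTMDP have transition rates q(j \mid i, a) with
% \sum_{j \neq i} q(j \mid i, a) \le \Lambda < \infty for all states i, actions a.
% Define a discrete-time MDP with transition probabilities
\[
\tilde{p}(j \mid i, a) =
\begin{cases}
  q(j \mid i, a)/\Lambda, & j \neq i,\\[4pt]
  1 - \sum_{k \neq i} q(k \mid i, a)/\Lambda, & j = i,
\end{cases}
\]
% and, for a continuous-time discount rate \alpha > 0, the discount factor
\[
\beta = \frac{\Lambda}{\alpha + \Lambda},
\]
% with rewards rescaled by 1/(\alpha + \Lambda). The two processes then have
% the same optimal value functions and optimal stationary policies.
```

The self-transition term absorbs the "fictitious" jumps introduced by sampling every process at the common rate $\Lambda$, which is what makes the discrete-time model simpler to analyze.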