Let {Xi} be a stationary Markov sequence with transition probability density function f(y | x), the conditional pdf of Xi+1 given Xi = x. In this study, nonparametric density and regression techniques are employed to estimate f(y | x) and the one-step predictor m(x) = E[Xi+1 | Xi = x]. It is shown that, under certain regularity and Markovian assumptions, the nonparametric estimator mn(x) converges to m(x) at the same asymptotic rate as if the Xi's had been independent and identically distributed, and this rate is optimal in a certain sense. Consistency is preserved even when the differentiability and Markovian assumptions are abandoned. Computational and modeling ramifications are explored. I argue that this methodology offers an interesting alternative to the popular ARMA approach.
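As an illustration of the kind of estimator the abstract describes, the following sketch applies Nadaraya-Watson kernel regression to successive pairs (Xi, Xi+1) of a simulated stationary Markov chain. The AR(1) simulation, the Gaussian kernel, and the bandwidth choice are assumptions made for the example, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a stationary AR(1) chain: X_{i+1} = 0.6 * X_i + noise,
# so the true one-step predictor is m(x) = E[X_{i+1} | X_i = x] = 0.6 x.
n = 5000
x = np.empty(n)
x[0] = rng.normal()
for i in range(n - 1):
    x[i + 1] = 0.6 * x[i] + rng.normal(scale=0.5)

def m_hat(x0, xs, h):
    """Nadaraya-Watson estimate of E[X_{i+1} | X_i = x0], Gaussian kernel."""
    pred, resp = xs[:-1], xs[1:]            # pairs (X_i, X_{i+1})
    w = np.exp(-0.5 * ((pred - x0) / h) ** 2)
    return np.sum(w * resp) / np.sum(w)

h = 0.2                                     # assumed bandwidth
print(m_hat(1.0, x, h))                     # should be close to 0.6 * 1.0
```

Only the observed sequence enters the estimator; no parametric form for m(x) is assumed, which is what distinguishes this approach from an ARMA fit.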