Abstract
We give a theoretical and experimental analysis of the generalization error of cross validation using two natural measures of the problem under consideration. The approximation rate measures the accuracy to which the target function can be ideally approximated as a function of the number of parameters, and thus captures the complexity of the target function with respect to the hypothesis model. The estimation rate measures the deviation between the training and generalization errors as a function of the number of parameters, and thus captures the extent to which the hypothesis model suffers from overfitting. Using these two measures, we give a rigorous and general bound on the error of the simplest form of cross validation. The bound clearly shows the dangers of making γ (the fraction of data saved for testing) too large or too small. By optimizing the bound with respect to γ, we then argue that the following qualitative properties of cross-validation behavior should be quite robust to significant changes in the underlying model selection problem: (1) when the target function complexity is small compared to the sample size, the performance of cross validation is relatively insensitive to the choice of γ; (2) the importance of choosing γ optimally increases, and the optimal value of γ decreases, as the target function becomes more complex relative to the sample size; (3) there is nevertheless a single fixed value of γ that works nearly optimally for a wide range of target function complexity.
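For concreteness, the "simplest form of cross validation" here is the hold-out method: a fraction γ of the m examples is reserved for testing, the remaining (1-γ)m examples are used to fit one hypothesis per model complexity d, and the hypothesis with the smallest held-out error is selected. The sketch below is purely illustrative, not the paper's construction: the polynomial hypothesis class, the function holdout_cross_validation, the parameters gamma and max_degree, and the synthetic cubic target are all assumptions chosen for the example.

```python
import numpy as np

def holdout_cross_validation(x, y, gamma=0.2, max_degree=10):
    """Hold-out model selection: reserve a gamma fraction of the sample
    for testing, fit one model per complexity level d on the rest, and
    return the d with the smallest held-out error.

    Illustrative sketch only; the polynomial hypothesis class and the
    parameter names are assumptions, not the paper's setup.
    """
    m = len(x)
    n_test = max(1, int(gamma * m))        # gamma*m points saved for testing
    perm = np.random.default_rng(0).permutation(m)
    test, train = perm[:n_test], perm[n_test:]

    best_degree, best_err = None, np.inf
    for d in range(max_degree + 1):        # d indexes model complexity
        # Fit on the (1-gamma)*m training points.
        coeffs = np.polyfit(x[train], y[train], d)
        # Estimate generalization error on the gamma*m held-out points.
        err = np.mean((np.polyval(coeffs, x[test]) - y[test]) ** 2)
        if err < best_err:
            best_degree, best_err = d, err
    return best_degree, best_err

# Usage: noisy cubic target, so moderate degrees should win.
x = np.linspace(-1, 1, 200)
y = x**3 - 0.5 * x + 0.1 * np.random.default_rng(1).normal(size=x.size)
print(holdout_cross_validation(x, y, gamma=0.2))
```

The γ tradeoff the bound formalizes is visible in the two lines marked with comments: a larger γ shrinks the training set, hurting the fitted hypotheses (approximation/estimation on (1-γ)m points), while a smaller γ shrinks the test set, making the held-out error a noisier basis for selection.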
