Abstract
Neural network construction and validation depend heavily on empirical data, which makes both tasks difficult when data are sparse. This paper examines the most common methods of neural network validation, along with several general validation methods from the statistical resampling literature, as applied to function approximation networks with small sample sizes. It is shown that the additional computation required by the statistical resampling methods produces networks that perform better than those constructed in the traditional manner. The statistical resampling methods also yield validation estimates with lower variance, although some of the methods are biased estimators of network error.
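As a concrete illustration of the kind of resampling-based validation the paper studies, the sketch below applies leave-one-out cross-validation to a small-sample function approximation network. It is not the paper's exact protocol; the target function, sample size, network architecture, and the use of scikit-learn's MLPRegressor are all illustrative assumptions.

```python
# Illustrative sketch only: leave-one-out cross-validation for estimating
# the generalization error of a function approximation network trained on
# a sparse data set. All names and parameter values are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(20, 1))              # sparse sample: 20 points
y = np.sin(np.pi * X).ravel() + rng.normal(0.0, 0.05, size=20)

squared_errors = []
for train_idx, test_idx in LeaveOneOut().split(X):
    # Refit the network on n-1 points, test on the held-out point.
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
    net.fit(X[train_idx], y[train_idx])
    pred = net.predict(X[test_idx])
    squared_errors.append((pred[0] - y[test_idx][0]) ** 2)

# Resampling-based estimate of network error (mean squared error).
print(f"LOO cross-validation MSE estimate: {np.mean(squared_errors):.4f}")
```

The extra cost is visible here: the network is retrained once per data point, in exchange for an error estimate that uses every observation for testing exactly once.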