A Practical Bayesian Framework for Backpropagation Networks
- 1 May 1992
- journal article
- Published by MIT Press in Neural Computation
- Vol. 4 (3), 448-472
- https://doi.org/10.1162/neco.1992.4.3.448
Abstract
A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible (1) objective comparisons between solutions using alternative network architectures, (2) objective stopping rules for network pruning or growing procedures, (3) objective choice of magnitude and type of weight decay terms or additive regularizers (for penalizing large weights, etc.), (4) a measure of the effective number of well-determined parameters in a model, (5) quantified estimates of the error bars on network parameters and on network output, and (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian "evidence" automatically embodies "Occam's razor," penalizing overflexible and overcomplex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalization ability and the Bayesian evidence is obtained.
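The abstract refers to the Bayesian "evidence" and to the effective number of well-determined parameters. As a rough illustration only, the sketch below shows how such quantities are typically computed under a quadratic (Gaussian) approximation around the most probable weights, with a Gaussian weight prior of precision alpha and Gaussian output noise of precision beta; the function name `gaussian_evidence` and its argument names are illustrative and not taken from the paper.

```python
import numpy as np

def gaussian_evidence(w_mp, H_data, alpha, beta, E_D, N):
    """Log evidence for hyperparameters (alpha, beta) under a quadratic
    (Gaussian) approximation around the most probable weights w_mp.

    w_mp   : most probable weight vector, shape (k,)
    H_data : Hessian of the data misfit E_D at w_mp, shape (k, k)
    alpha  : weight-decay (prior precision) hyperparameter
    beta   : noise-level (likelihood precision) hyperparameter
    E_D    : data misfit at w_mp (e.g. half the sum of squared errors)
    N      : number of training examples
    """
    k = w_mp.size
    E_W = 0.5 * np.dot(w_mp, w_mp)          # quadratic weight-decay term
    A = beta * H_data + alpha * np.eye(k)   # Hessian of the total objective
    _, logdet_A = np.linalg.slogdet(A)

    # Gaussian-approximation log evidence for (alpha, beta)
    log_ev = (-alpha * E_W - beta * E_D
              - 0.5 * logdet_A
              + 0.5 * k * np.log(alpha)
              + 0.5 * N * np.log(beta)
              - 0.5 * N * np.log(2.0 * np.pi))

    # Effective number of well-determined parameters:
    # gamma = sum_i lambda_i / (lambda_i + alpha),
    # where lambda_i are eigenvalues of beta * H_data
    lam = np.linalg.eigvalsh(beta * H_data)
    gamma = np.sum(lam / (lam + alpha))
    return log_ev, gamma
```

In evidence-framework practice the hyperparameters are often re-estimated from gamma, e.g. alpha = gamma / (2 E_W) and beta = (N - gamma) / (2 E_D), and models or regularizers are compared by their log evidence; this is a sketch of that general recipe, not the paper's exact implementation.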