Multiple networks for function learning
- 30 December 2002
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
In the case of learning a mapping, it is proposed to build several possible models instead of one, train them all independently on the same task, and take a vote over their responses. These networks converge to different solutions because they use different models, different parameter set sizes, or other factors that vary during training. Two training methods are used: grow and learn (GAL), a memory-based method, and backpropagation. Several voting schemes are investigated, and their performances are compared on a real-world classification application (recognition of handwritten numerals) and a two-dimensional didactic case. The weights in voting may be interpreted in two ways: as the certainty of a network in its output, or, in a Bayesian setting, as the plausibility, i.e., the prior probability of the model. In all cases tested, the result of voting is better than the result of any single network that participated in the voting process.
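As a rough illustration of the voting scheme the abstract describes, the sketch below trains several networks of different sizes independently on the same task and combines their outputs by a weighted vote. It uses scikit-learn's MLPClassifier as a stand-in for the backpropagation-trained networks (GAL is not implemented here), and training accuracy as the voting weight; both choices are assumptions for illustration, not the paper's exact method.

```python
# Minimal sketch of voting over independently trained networks.
# Assumptions (not from the paper): MLPClassifier stands in for the
# backprop-trained nets; training accuracy stands in for the
# certainty/plausibility weight discussed in the abstract.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Handwritten digits, akin to the paper's numeral-recognition task.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Different hidden-layer sizes give networks with different parameter
# set sizes, so they converge to different solutions on the same data.
networks = [MLPClassifier(hidden_layer_sizes=(h,), max_iter=500, random_state=s)
            for s, h in enumerate([16, 32, 64])]
for net in networks:
    net.fit(X_train, y_train)

# Weighted vote: sum each network's class-probability output, scaled by
# its weight (here, its training accuracy).
weights = np.array([net.score(X_train, y_train) for net in networks])
probs = sum(w * net.predict_proba(X_test) for w, net in zip(weights, networks))
vote = probs.argmax(axis=1)

for net in networks:
    print(f"single net {net.hidden_layer_sizes}: {net.score(X_test, y_test):.3f}")
print(f"weighted vote: {(vote == y_test).mean():.3f}")
```

Soft voting over class probabilities is one of several plausible combination rules; the paper compares multiple schemes, and a hard majority vote over predicted labels would be an equally simple variant.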