Optimization for training neural nets
- 1 March 1992
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 3 (2), 232-240
- https://doi.org/10.1109/72.125864
Abstract
Various techniques of optimizing criterion functions to train neural-net classifiers are investigated. These techniques include three standard deterministic techniques (variable metric, conjugate gradient, and steepest descent), and a new stochastic technique. It is found that the stochastic technique is preferable on problems with large training sets and that the convergence rates of the variable metric and conjugate gradient techniques are similar.
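The sketch below is a minimal illustration of the contrast the abstract draws, not the paper's exact algorithms: it trains a small one-hidden-layer classifier on a mean-squared-error criterion, once with full-batch steepest descent and once with per-pattern stochastic gradient updates. The architecture, synthetic data, learning rates, and epoch counts are all illustrative assumptions.

```python
# Minimal sketch: full-batch steepest descent vs. per-pattern stochastic updates
# on a small neural-net classifier with an MSE criterion. Hyperparameters are
# illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: two Gaussian blobs in 2-D.
n = 2000
X = np.vstack([rng.normal(-1.0, 1.0, (n // 2, 2)),
               rng.normal(+1.0, 1.0, (n // 2, 2))])
y = np.hstack([np.zeros(n // 2), np.ones(n // 2)])

def init_params(hidden=8):
    return {"W1": rng.normal(0, 0.5, (2, hidden)),
            "b1": np.zeros(hidden),
            "W2": rng.normal(0, 0.5, hidden),
            "b2": 0.0}

def forward(p, X):
    h = np.tanh(X @ p["W1"] + p["b1"])                       # hidden activations
    out = 1.0 / (1.0 + np.exp(-(h @ p["W2"] + p["b2"])))     # sigmoid output
    return h, out

def gradients(p, X, y):
    # Backpropagation of the mean-squared-error criterion.
    h, out = forward(p, X)
    err = (out - y) * out * (1.0 - out)                       # dE/dnet at the output
    dW2 = h.T @ err / len(y)
    db2 = err.mean()
    dh = np.outer(err, p["W2"]) * (1.0 - h ** 2)              # backprop through tanh
    dW1 = X.T @ dh / len(y)
    db1 = dh.mean(axis=0)
    return {"W1": dW1, "b1": db1, "W2": dW2, "b2": db2}

def mse(p, X, y):
    return 0.5 * np.mean((forward(p, X)[1] - y) ** 2)

# Full-batch steepest descent: one gradient step per pass over the whole set.
p = init_params()
for epoch in range(50):
    g = gradients(p, X, y)
    for k in p:
        p[k] -= 2.0 * g[k]
print("steepest descent   MSE:", mse(p, X, y))

# Per-pattern stochastic updates: many cheap steps per pass, the regime the
# abstract suggests is preferable when the training set is large.
p = init_params()
for epoch in range(5):
    for i in rng.permutation(n):
        g = gradients(p, X[i:i + 1], y[i:i + 1])
        for k in p:
            p[k] -= 0.5 * g[k]
print("stochastic updates MSE:", mse(p, X, y))
```

Variable metric and conjugate gradient methods would replace the fixed-step updates above with search directions built from curvature or past-gradient information; they are omitted here to keep the sketch short.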