Local and Global Convergence of On-Line Learning
- 14 August 1995
- research article
- Published by American Physical Society (APS) in Physical Review Letters
- Vol. 75 (7), 1415-1418
- https://doi.org/10.1103/physrevlett.75.1415
Abstract
We study the performance of a generalized perceptron algorithm for learning realizable dichotomies, with an error-dependent adaptive learning rate. The asymptotic scaling form of the solution to the associated Markov equations is derived, assuming certain smoothness conditions. We show that the system converges to the optimal solution and the generalization error asymptotically obeys a universal inverse power law in the number of examples. The system is capable of escaping from local minima and adapts rapidly to shifts in the target function. The general theory is illustrated for the perceptron and committee machine.
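The setting described in the abstract is on-line learning: one example is presented per time step, and the student applies a perceptron-type update whose step size depends on the current error. The sketch below illustrates this kind of scheme under stated assumptions; the running-error proxy used as the error-dependent learning rate, the teacher/student setup, and all variable names are illustrative choices, not the paper's exact prescription, and the committee-machine case and local-minima analysis are not reproduced.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's exact algorithm):
# on-line perceptron learning of a realizable dichotomy defined by a
# teacher vector, with a learning rate tied to a running error estimate.

rng = np.random.default_rng(0)
N = 100                                   # input dimension
teacher = rng.standard_normal(N)
teacher /= np.linalg.norm(teacher)        # teacher defines the target dichotomy

w = rng.standard_normal(N)                # student weights
err_est = 0.5                             # running error estimate (chance level)

for t in range(1, 50_001):
    x = rng.standard_normal(N) / np.sqrt(N)   # one fresh example per step
    label = np.sign(teacher @ x)              # realizable target label
    mistake = np.sign(w @ x) != label

    # Error-dependent adaptive learning rate: here simply the running
    # mistake frequency, as an illustrative stand-in for the paper's schedule.
    err_est += (float(mistake) - err_est) / t
    eta = err_est

    if mistake:
        w += eta * label * x                  # perceptron-type update

# Generalization error for this geometry: angle between student and teacher / pi.
overlap = (w @ teacher) / np.linalg.norm(w)
gen_error = np.arccos(np.clip(overlap, -1.0, 1.0)) / np.pi
print(f"estimated generalization error after {t} examples: {gen_error:.4f}")
```

Because the step size shrinks together with the estimated error, updates become smaller as the student approaches the teacher; this is the qualitative mechanism behind the inverse-power-law decay of the generalization error stated in the abstract, though the precise coupling between error and learning rate analyzed in the paper may differ from this proxy.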