Stochastic dynamics of supervised learning
- 1 January 1993
- journal article
- Published by IOP Publishing in Journal of Physics A: Mathematical and General
- Vol. 26 (1), 63-71
- https://doi.org/10.1088/0305-4470/26/1/011
Abstract
The stochastic evolution of adiabatic (slow) backpropagation training of a neural network is discussed, and a Fokker-Planck equation for the post-training distribution function in network space is derived. The distribution obtained differs from the one given by Radons et al. (1990). Studying the character of the post-training distribution, the authors find that, except under very special circumstances, the distribution is non-Gibbsian. The validity of the approach is tested on a simple backpropagation learning system in one dimension, which can also be solved analytically. Implications of the Fokker-Planck approach for general situations are examined in the local linear approximation. Surprisingly, the authors find that the post-training distribution is isotropic close to its peak, and hence simpler than the corresponding Gibbs distribution.
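For orientation, a minimal sketch under generic assumptions (not the paper's specific result): for weights $w$ updated by noisy gradient descent on a training error $E(w)$ with a small learning rate $\eta$, the evolving weight distribution $P(w,t)$ is commonly modelled by a Fokker-Planck equation of the form

```latex
\partial_t P(w,t) \;=\; \nabla\!\cdot\!\big[\nabla E(w)\,P(w,t)\big]
\;+\; \frac{\eta}{2}\,\nabla\!\cdot\!\big[D(w)\,\nabla P(w,t)\big],
```

where $D(w)$ is the covariance of the gradient noise. Only for constant $D$ does the stationary solution reduce to a Gibbs form $P \propto e^{-2E(w)/\eta D}$; a weight-dependent $D$ generically yields non-Gibbsian post-training distributions of the kind the abstract describes.

A numerical illustration of the one-dimensional test case can be sketched as follows; the quadratic error and noise model here are assumptions for illustration, not the paper's exact system:

```python
import numpy as np

# Hypothetical 1D toy model: minimize E(w) = w^2 / 2 with noisy
# gradient estimates g = w + noise, then inspect the stationary
# ("post-training") weight distribution.
rng = np.random.default_rng(0)
eta = 0.01          # small (adiabatic) learning rate
sigma = 1.0         # gradient-noise scale
w = 1.0
samples = []
for step in range(200_000):
    g = w + sigma * rng.standard_normal()   # stochastic gradient of w^2/2
    w -= eta * g
    if step > 50_000:                        # discard the transient
        samples.append(w)

# With constant noise, the empirical variance should approach the
# Gibbs-like prediction eta * sigma^2 / 2 for this quadratic error.
print(np.var(samples), eta * sigma**2 / 2)
```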
This publication has 3 references indexed in Scilit:
- A statistical approach to learning and generalization in layered neural networks. Proceedings of the IEEE, 1990
- Learning from examples in large neural networks. Physical Review Letters, 1990
- Phase transitions in simple learning. Journal of Physics A: Mathematical and General, 1989