Learning processes in neural networks

Abstract
We study the learning dynamics of neural networks from a general point of view. The environment from which the network learns is defined as a set of input stimuli. At discrete points in time, one of these stimuli is presented and an incremental learning step takes place. If the time between learning steps is drawn from a Poisson distribution, the dynamics of an ensemble of learning processes is described by a continuous-time master equation. A learning algorithm that enables a neural network to adapt to a changing environment must have a nonzero learning parameter. This constant adaptability, however, comes at the cost of fluctuations in the plasticities, such as synaptic weights and thresholds. The ensemble description allows us to study the asymptotic behavior of the plasticities for a large class of neural networks. For small learning parameters, we derive an expression for the size of the fluctuations in an unchanging environment. In a changing environment, there is a trade-off between adaptability and accuracy (i.e., the size of the fluctuations). We use the networks of Grossberg [J. Stat. Phys. 1, 319 (1969)] and Oja [J. Math. Biol. 15, 267 (1982)] as simple examples to analyze and simulate the performance of neural networks in a changing environment. In some cases an optimal learning parameter can be calculated.
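
The following is a minimal sketch, not taken from the paper, of the kind of incremental learning process the abstract describes: Oja's rule driven by stimuli drawn at random from a fixed environment, with a small but nonzero learning parameter. The stimulus distribution, the value of the learning parameter eta, and all variable names are illustrative assumptions; the sketch only shows that the weight vector settles near the principal component while continuing to fluctuate, which is the accuracy side of the adaptability/accuracy trade-off.

```python
import numpy as np

rng = np.random.default_rng(0)

# Environment: a set of 2-D input stimuli with unequal variances, so the
# principal component lies along the first coordinate axis (assumed setup).
stimuli = rng.normal(size=(1000, 2)) * np.array([2.0, 0.5])

eta = 0.01                    # learning parameter, kept nonzero for adaptability
w = rng.normal(size=2)
w /= np.linalg.norm(w)

n_steps = 50_000
for _ in range(n_steps):
    x = stimuli[rng.integers(len(stimuli))]  # one stimulus presented per learning step
    y = w @ x                                # network output
    w += eta * y * (x - y * w)               # Oja's incremental learning step

# With small eta, w fluctuates around the principal eigenvector (about +/- e_1);
# a larger eta tracks a changing environment faster but enlarges the fluctuations.
print("final weight vector:", w)
print("alignment with principal axis:", abs(w[0]) / np.linalg.norm(w))
```

Reducing eta shrinks the residual fluctuations but slows the response to a change in the stimulus set, which is the trade-off the abstract refers to when it mentions an optimal learning parameter.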
