A Convergence Result for Learning in Recurrent Neural Networks
- 1 May 1994
- Journal article
- Published by MIT Press in Neural Computation
- Vol. 6 (3), 420-440
- https://doi.org/10.1162/neco.1994.6.3.420
Abstract
We give a rigorous analysis of the convergence properties of a backpropagation algorithm for recurrent networks containing either output or hidden layer recurrence. The conditions permit data generated by stochastic processes with considerable dependence. Restrictions are offered that may help assure convergence of the network parameters to a local optimum, as some simulations illustrate.
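The abstract summarizes, rather than spells out, the learning scheme. As a purely illustrative sketch, the Python snippet below trains a small recurrent network online with a decreasing Robbins-Monro-style step size, the general kind of recursive stochastic-gradient procedure whose convergence such analyses address. The hidden-layer architecture, the truncated one-step gradient, and the AR(1) data-generating process are assumptions made for this example, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid = 1, 8
W_xh = 0.1 * rng.standard_normal((n_hid, n_in))   # input -> hidden weights
W_hh = 0.1 * rng.standard_normal((n_hid, n_hid))  # hidden -> hidden (recurrence)
w_hy = 0.1 * rng.standard_normal(n_hid)           # hidden -> output weights

def step(x, h_prev):
    """One forward step: tanh hidden layer with recurrence, linear scalar output."""
    h = np.tanh(W_xh @ x + W_hh @ h_prev)
    return w_hy @ h, h

# Dependent training data: an assumed AR(1)-type target driven by the inputs
# (the paper allows far more general dependence than this toy process).
T = 20000
x_seq = rng.standard_normal((T, n_in))
y_seq = np.zeros(T)
for t in range(1, T):
    y_seq[t] = 0.7 * y_seq[t - 1] + 0.5 * x_seq[t, 0]

h = np.zeros(n_hid)
sq_err = []
for t in range(1, T):
    # Decreasing gain of the Robbins-Monro type: sum eta_t = inf, sum eta_t^2 < inf.
    eta = 0.05 / (1.0 + t / 1000.0)

    y_hat, h_new = step(x_seq[t], h)
    err = y_hat - y_seq[t]
    sq_err.append(err ** 2)

    # Truncated (one-step) gradient of 0.5 * err^2: the previous hidden state
    # is treated as a constant, as in simple online recurrent training schemes.
    dh = err * w_hy * (1.0 - h_new ** 2)   # backpropagate through the tanh layer
    w_hy -= eta * err * h_new
    W_xh -= eta * np.outer(dh, x_seq[t])
    W_hh -= eta * np.outer(dh, h)
    h = h_new

print(f"mean squared error over the last 1000 steps: {np.mean(sq_err[-1000:]):.4f}")
```

With the decreasing gain, the squared error settles rather than oscillating, which is the behavior a convergence result of this kind is meant to guarantee under suitable conditions.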