An Efficient Gradient-Based Algorithm for On-Line Training of Recurrent Network Trajectories
- 1 December 1990
- journal article
- Published by MIT Press in Neural Computation
- Vol. 2 (4), 490-501
- https://doi.org/10.1162/neco.1990.2.4.490
Abstract
A novel variant of a familiar recurrent network learning algorithm is described. This algorithm is capable of shaping the behavior of an arbitrary recurrent network as it runs, and it is specifically designed to execute efficiently on serial machines.

1 Introduction
Artificial neural networks having feedback connections can implement a wide variety of dynamical systems. The problem of training such a network is the problem of finding a particular dynamical system from among a parameterized...
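The abstract describes an on-line gradient method for training a running recurrent network. As a hedged illustration only (the paper's exact update rule is not reproduced here), the following sketch shows the general idea of an on-line, truncation-based gradient update for a small fully recurrent network: at each time step, the error at the current output is backpropagated through at most the last `h` stored time steps. The network size, tanh nonlinearity, squared-error loss, truncation depth, and learning rate are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch: on-line training of a small recurrent network by
# backpropagating the current step's error through a truncated window
# of past states. All hyperparameters here are illustrative assumptions.
rng = np.random.default_rng(0)
n_in, n_hid = 2, 4
W_in = rng.normal(0.0, 0.5, (n_hid, n_in))    # input-to-hidden weights
W_rec = rng.normal(0.0, 0.5, (n_hid, n_hid))  # recurrent weights
w_out = rng.normal(0.0, 0.5, n_hid)           # hidden-to-output weights

def step(x, s):
    """Advance the network state one time step."""
    return np.tanh(W_in @ x + W_rec @ s)

def truncated_update(xs, states, target, lr=0.1, h=3):
    """One on-line gradient update: backprop the current error
    through at most the last `h` time steps of stored history."""
    global W_in, W_rec, w_out
    s_t = states[-1]
    y = w_out @ s_t
    err = y - target                      # gradient of 0.5*(y - target)^2
    grad_out = err * s_t
    delta = err * w_out * (1.0 - s_t**2)  # backprop through tanh
    gW_in = np.zeros_like(W_in)
    gW_rec = np.zeros_like(W_rec)
    for k in range(1, min(h, len(xs)) + 1):   # walk backwards in time
        s_prev = states[-k - 1]
        gW_in += np.outer(delta, xs[-k])
        gW_rec += np.outer(delta, s_prev)
        delta = (W_rec.T @ delta) * (1.0 - s_prev**2)
    w_out -= lr * grad_out
    W_in -= lr * gW_in
    W_rec -= lr * gW_rec
    return 0.5 * err**2

# Run the network on-line, updating the weights at every step.
xs, states, losses = [], [np.zeros(n_hid)], []
for t in range(200):
    x = np.array([np.sin(0.1 * t), 1.0])  # signal plus constant bias input
    states.append(step(x, states[-1]))
    xs.append(x)
    losses.append(truncated_update(xs, states, target=0.5))
```

Because the window is bounded by `h`, each update costs a fixed amount of work per time step, which is the kind of serial-machine efficiency the abstract emphasizes; the history buffer could likewise be capped at `h` entries rather than grown indefinitely as in this sketch.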