A Subgrouping Strategy that Reduces Complexity and Speeds Up Learning in Recurrent Networks
- 1 December 1989
- journal article
- Published by MIT Press in Neural Computation
- Vol. 1 (4), 552-558
- https://doi.org/10.1162/neco.1989.1.4.552
Abstract
An algorithm, called RTRL, for training fully recurrent neural networks has recently been studied by Williams and Zipser (1989a, b). While RTRL has been shown to have great power and generality, it has the disadvantage of requiring a great deal of computation time. A technique is described here for reducing the amount of computation required by RTRL without changing the connectivity of the networks. This is accomplished by dividing the original network into subnets for the purpose of error propagation while leaving them undivided for activity propagation. An example is given of a 12-unit network that learns to be the finite-state part of a Turing machine and runs 10 times faster with the subgrouping strategy than with the original algorithm.
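The source of the speedup described in the abstract can be sketched with a rough operation count (a back-of-envelope model for illustration, not the paper's own analysis; the function names are hypothetical). Full RTRL keeps a sensitivity for every (unit, weight) pair and updates each one with a sum over all units; restricting error propagation to subnets shrinks two of those factors:

```python
def rtrl_ops(n: int) -> int:
    """Approximate multiplies per time step for full RTRL on n units.

    Each of the n*n weights has n sensitivities, and each sensitivity
    update sums over all n units -> O(n**4).
    """
    return n ** 4


def subgrouped_ops(n: int, g: int) -> int:
    """Approximate multiplies per step with g subnets of m = n // g units.

    Sensitivities are kept only within a subnet, so each of the g subnets
    updates m * (m * n) sensitivities, each summing over its m units:
    g * m**3 * n = n**4 / g**2.
    """
    m = n // g
    return g * m ** 3 * n


# For a 12-unit network split into 3 subnets, this crude count predicts
# roughly a g**2 = 9-fold reduction, consistent in order of magnitude
# with the roughly tenfold speedup reported for the 12-unit example.
print(rtrl_ops(12) // subgrouped_ops(12, 3))
```

Note that in the subgrouping strategy activity still propagates through the full, undivided network; only the sensitivity (error) bookkeeping is confined to subnets, which is why the network's connectivity is unchanged.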
References
- A Learning Algorithm for Continually Running Fully Recurrent Neural Networks. Neural Computation, 1989.
- Experimental Analysis of the Real-time Recurrent Learning Algorithm. Connection Science, 1989.