The Dynamics of Discrete-Time Computation, with Application to Recurrent Neural Networks and Finite State Machine Extraction
- 1 August 1996
- journal article
- Published by MIT Press in Neural Computation
- Vol. 8 (6), pp. 1135–1178
- https://doi.org/10.1162/neco.1996.8.6.1135
Abstract
Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states of the minimal deterministic finite state machine that can perform that computation, and a precise description of the attractor structure of such systems is given. This knowledge effectively predicts activation space dynamics, making RNN computation intelligible despite the complexity of those dynamics. The theory provides a framework for understanding finite state machine (FSM) extraction techniques and can be used to improve training methods for RNNs performing FSM computations. It also offers an example of a successful approach to understanding a general class of complex systems that have not been explicitly designed, e.g., systems that have evolved or learned their internal structure.
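As a rough illustration of the FSM extraction idea summarized above, the sketch below quantizes the state space of a tiny hand-built recurrent unit and reads off the transition table of the resulting machine. This is a minimal sketch, not the paper's construction: the one-unit parity network, its weights, and the rounding-based partition (`rnn_step`, `quantize`, `extract_fsm`) are all illustrative assumptions standing in for a trained RNN and an attractor-based partition of activation space.

```python
import numpy as np

# Hand-built one-unit "RNN" computing parity of a binary string: the
# state flips sign on input 1 and is pushed back toward its attractor
# on input 0. The weights are illustrative, not taken from the paper.
def rnn_step(h, x):
    return np.tanh(2.0 * h) if x == 0 else np.tanh(-2.0 * h)

def quantize(h, decimals=1):
    # Crude stand-in for an attractor-based partition of activation
    # space: round the activation so nearby states share one label.
    return round(float(h), decimals)

def extract_fsm(h0, alphabet=(0, 1), max_transitions=50):
    """Breadth-first exploration of the quantized state space,
    recording one transition per (state, symbol) pair."""
    transitions = {}                    # (state, symbol) -> state
    frontier = [h0]
    raw = {quantize(h0): h0}            # quantized label -> a raw state
    while frontier and len(transitions) < max_transitions:
        h = frontier.pop()
        s = quantize(h)
        for x in alphabet:
            h2 = rnn_step(h, x)
            s2 = quantize(h2)
            if (s, x) not in transitions:
                transitions[(s, x)] = s2
                if s2 not in raw:       # newly discovered state
                    raw[s2] = h2
                    frontier.append(h2)
    return transitions

if __name__ == "__main__":
    h0 = 0.9575  # near the fixed point of h -> tanh(2h)
    for (state, sym), nxt in sorted(extract_fsm(h0).items()):
        print(f"state {state:+.1f} --{sym}--> {nxt:+.1f}")
```

Run as a script, this prints the four transitions of a two-state parity machine: the minimal DFA whose states the toy network's two attracting regions mimic, in the sense the abstract describes.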