Extraction, Insertion and Refinement of Symbolic Rules in Dynamically Driven Recurrent Neural Networks
- 1 January 1993
- journal article
- research article
- Published by Taylor & Francis in Connection Science
- Vol. 5 (3-4), 307-337
- https://doi.org/10.1080/09540099308915703
Abstract
Recurrent neural networks readily process, learn and generate temporal sequences. In addition, they have been shown to have impressive computational power. Recurrent neural networks can be trained with symbolic string examples encoded as temporal sequences to behave like sequential finite state recognizers. We discuss methods for extracting, inserting and refining symbolic grammatical rules for recurrent networks. This paper discusses various issues: how rules are inserted into recurrent networks, how they affect training and generalization, and how those rules can be checked and corrected. The capability of exchanging information between a symbolic representation (grammatical rules) and a connectionist representation (trained weights) has interesting implications. After partially known rules are inserted, recurrent networks can be trained to preserve inserted rules that were correct and to correct through training inserted rules that were 'incorrect', i.e. rules inconsistent with the training data.
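The rule-insertion idea described in the abstract can be illustrated with a small sketch. Below is a minimal, hypothetical NumPy example (not the paper's actual implementation) that programs the transitions of a known 2-state DFA into the second-order weights of a recurrent network, so that the network behaves as the corresponding finite state recognizer before any training. The DFA, the rule strength `H`, and the state/symbol encodings are all illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical 2-state DFA: accept binary strings with an even number of 1s.
# States: 0 (accepting start state) and 1. Alphabet: '0' -> index 0, '1' -> index 1.
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

N, L, H = 2, 2, 8.0            # state neurons, input symbols, rule strength (assumed)
W = -H * np.ones((N, N, L))    # W[j, i, k]: weight into next-state neuron j,
                               # from current-state neuron i under input symbol k
b = -(H / 2) * np.ones(N)      # biases push non-target neurons toward 0
for (i, k), j in delta.items():
    W[j, i, k] = +H            # insert the known transition as a large positive weight

def run(string):
    S = np.array([1.0, 0.0])   # start near the one-hot code of DFA state 0
    for ch in string:
        I = np.eye(L)[int(ch)]
        # second-order update: S_j <- sigmoid(sum_{i,k} W[j,i,k] S_i I_k + b_j)
        S = sigmoid(np.einsum('jik,i,k->j', W, S, I) + b)
    return bool(S[0] > 0.5)    # neuron 0 encodes the accepting state

for s in ['', '11', '101', '1']:
    print(repr(s), run(s))
```

With a large `H`, the sigmoid saturates and the state vector stays close to a one-hot code, so the programmed network tracks the DFA exactly; with a smaller `H`, the inserted rules act only as an initialization bias that subsequent gradient training can preserve or override, which is the refinement behavior the abstract describes.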