A Penalty-Function Approach for Pruning Feedforward Neural Networks
- 1 January 1997
- journal article
- Published by MIT Press in Neural Computation
- Vol. 9 (1), 185-204
- https://doi.org/10.1162/neco.1997.9.1.185
Abstract
This article proposes the use of a penalty function for pruning feedforward neural networks by weight elimination. The proposed penalty function consists of two terms: the first discourages the use of unnecessary connections, and the second prevents the weights of the connections from taking excessively large values. Simple criteria for eliminating weights from the network are also given. The effectiveness of the penalty function is tested on three well-known problems: the contiguity problem, the parity problems, and the MONK's problems. For many of these problems, the resulting pruned networks have fewer connections than previously reported in the literature.
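The abstract does not reproduce the penalty function itself. As a minimal sketch, assuming a saturating weight-elimination term plus a quadratic weight-decay term (a common form for penalties with the two roles the abstract describes), the idea might look as follows; the coefficients `eps1`, `eps2`, `beta` and the magnitude-threshold pruning rule are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def pruning_penalty(weights, eps1=0.1, eps2=1e-4, beta=10.0):
    """Illustrative two-term penalty in the spirit of the abstract.

    Term 1: beta*w^2 / (1 + beta*w^2) -- saturates for large |w|, so it
            mainly drives small (unnecessary) weights toward zero.
    Term 2: w^2 -- standard weight decay, keeping weights from
            taking excessively large values.
    eps1, eps2, beta are hypothetical coefficients, not from the paper.
    """
    w2 = np.asarray(weights) ** 2
    term1 = np.sum(beta * w2 / (1.0 + beta * w2))  # weight elimination
    term2 = np.sum(w2)                             # weight decay
    return eps1 * term1 + eps2 * term2

def prune_mask(weights, threshold=0.1):
    """Hypothetical magnitude-based elimination criterion: keep a
    connection only if |w| reaches the threshold."""
    return np.abs(weights) >= threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.5, size=20)
    print("penalty:", pruning_penalty(w))
    print("connections kept:", int(np.count_nonzero(prune_mask(w))))
```

The saturating first term approaches a constant for large |w|, so it selectively pushes small weights toward zero without distorting strong connections, while the quadratic second term bounds weight magnitudes; these are the two effects the abstract attributes to the proposed penalty.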