Development of low entropy coding in a recurrent network
- 1 May 1996
- journal article
- Published by Taylor & Francis in Network: Computation in Neural Systems
- Vol. 7 (2), 277-284
- https://doi.org/10.1088/0954-898x/7/2/007
Abstract
In this paper we present an unsupervised neural network which exhibits competition between units via inhibitory feedback. The operation is such as to minimize reconstruction error, both for individual patterns and over the entire training set. A key difference from networks which perform principal components analysis, or one of its variants, is the ability to converge to non-orthogonal weight values. We discuss the network's operation in relation to the twin goals of maximizing information transfer and minimizing code entropy, and show how the assignment of prior probabilities to network outputs can help to reduce entropy. We present results from two binary coding problems, and from experiments with image coding.
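The abstract describes the mechanism only at a high level. The minimal NumPy sketch below illustrates one way such a scheme could look: output activities settle by feeding the reconstruction error back through the weights (the feedback term provides the inhibitory competition between units), and a Hebbian-style update on the residual allows the weights to settle on non-orthogonal directions. All function names, learning rates, and iteration counts here are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def encode(W, x, n_steps=50, lr_a=0.1):
    """Settle output activities y by gradient descent on the
    reconstruction error ||x - W.T @ y||^2 (hypothetical settling rule;
    the term W @ (W.T @ y) acts as inhibitory feedback between units)."""
    y = np.zeros(W.shape[0])
    for _ in range(n_steps):
        residual = x - W.T @ y        # reconstruction error fed back
        y += lr_a * (W @ residual)    # feedforward drive minus inhibition
    return y, residual

def train(X, n_units, n_epochs=20, lr_w=0.01, seed=0):
    """Hebbian-style weight update on the residual; unlike PCA-type rules,
    nothing forces the rows of W to become orthogonal (illustrative values)."""
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.standard_normal((n_units, X.shape[1]))
    for _ in range(n_epochs):
        for x in X:
            y, residual = encode(W, x)
            W += lr_w * np.outer(y, residual)   # dW proportional to y * residual^T
    return W
```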