Redundancy Reduction as a Strategy for Unsupervised Learning
- 1 March 1993
- journal article
- Published by MIT Press in Neural Computation
- Vol. 5 (2), 289-304
- https://doi.org/10.1162/neco.1993.5.2.289
Abstract
A redundancy reduction strategy, which can be applied in stages, is proposed as a way to learn as efficiently as possible the statistical properties of an ensemble of sensory messages. The method works best for inputs consisting of strongly correlated groups, that is features, with weaker statistical dependence between different features. This is the case for localized objects in an image or for words in a text. A local feature measure determining how much a single feature reduces the total redundancy is derived, which turns out to depend only on the probability of the feature and of its components, but not on the statistical properties of any other features. The locality of this measure makes it ideal as the basis for a "neural" implementation of redundancy reduction, and an example of a very simple non-Hebbian algorithm is given. The effect of noise on learning redundancy is also discussed.
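The locality property described in the abstract can be illustrated with a minimal sketch for the word-in-text case. The score below is a pointwise-mutual-information-weighted measure chosen as an assumption for illustration only; the paper derives its own measure, but like it, this score depends only on the probability of the candidate feature (here a word pair) and of its components, never on other features.

```python
from collections import Counter
from math import log2

def feature_scores(tokens):
    """Score candidate two-word features by a local measure that uses
    only the probability of the feature and of its components.
    (PMI-weighted score: an illustrative stand-in, not the paper's formula.)"""
    n = len(tokens)
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    scores = {}
    for (w1, w2), count in bigrams.items():
        p_f = count / (n - 1)            # probability of the feature
        p1 = unigrams[w1] / n            # probability of first component
        p2 = unigrams[w2] / n            # probability of second component
        # Locality: only p_f, p1, p2 enter the score -- no statistics
        # of any other feature are needed.
        scores[(w1, w2)] = p_f * log2(p_f / (p1 * p2))
    return scores

tokens = "the cat sat on the mat the cat ate the rat".split()
scores = feature_scores(tokens)
```

Because each candidate's score is computed independently of all others, such a measure lends itself to the kind of local, "neural" update rule the abstract mentions.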