Finding Minimum Entropy Codes
- 1 September 1989
- journal article
- Published by MIT Press in Neural Computation
- Vol. 1 (3), 412-423
- https://doi.org/10.1162/neco.1989.1.3.412
Abstract
To determine whether a particular sensory event is a reliable predictor of reward or punishment it is necessary to know the prior probability of that event. If the variables of a sensory representation normally occur independently of each other, then it is possible to derive the prior probability of any logical function of the variables from the prior probabilities of the individual variables, without any additional knowledge; hence such a representation enormously enlarges the scope of definable events that can be searched for reliable predictors. Finding a Minimum Entropy Code is a possible method of forming such a representation, and methods for doing this are explored in this paper. The main results are (1) to show how to find such a code when the probabilities of the input states form a geometric progression, as is shown to be nearly true for keyboard characters in normal text; (2) to show how a Minimum Entropy Code can be approximated by repeatedly recoding pairs, triples, etc. of an original 7-bit ...
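The objective the abstract describes can be illustrated numerically. The sketch below is not the paper's algorithm; it only computes the quantity a minimum entropy code tries to minimize (the sum of the entropies of the individual output bits) for a hypothetical 8-state source whose probabilities form a geometric progression, and compares it with the joint entropy of the source, which is the lower bound that an ideal code with independent bits would attain. The ratio r, the 3-bit codes, and the plain-binary baseline are all illustrative assumptions.

```python
import math

# Hypothetical 8-state source whose probabilities form a geometric progression,
# as the abstract notes is roughly true of keyboard characters in normal text.
r = 0.5
weights = [r ** k for k in range(8)]
p = [w / sum(weights) for w in weights]

def sum_of_bit_entropies(code):
    """Sum, over output bits, of the entropy of each bit taken alone.

    `code[s]` is the 3-bit tuple assigned to state s. A minimum entropy
    code drives this sum as close as possible to the joint entropy of the
    source; the two are equal only when the output bits are independent.
    """
    n_bits = len(code[0])
    total = 0.0
    for b in range(n_bits):
        p_one = sum(p[s] for s in range(len(p)) if code[s][b] == 1)
        for q in (p_one, 1.0 - p_one):
            if q > 0.0:
                total -= q * math.log2(q)
    return total

# Joint entropy of the source: lower bound on the sum of bit entropies.
H = -sum(q * math.log2(q) for q in p)

# Plain binary numbering of the states in probability order, used here
# only as a baseline assignment of codewords.
binary = [tuple((s >> k) & 1 for k in (2, 1, 0)) for s in range(8)]

print(f"joint entropy of source       : {H:.3f} bits")
print(f"sum of bit entropies (binary) : {sum_of_bit_entropies(binary):.3f} bits")
```

For a source this small one could evaluate every permutation of codewords and pick the assignment with the smallest bit-entropy sum; the paper's contribution concerns how such codes can be found or approximated without exhaustive search.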