A Bayesian Analysis of Self-Organizing Maps
- 1 September 1994
- journal article
- Published by MIT Press in Neural Computation
- Vol. 6 (5), 767-794
- https://doi.org/10.1162/neco.1994.6.5.767
Abstract
In this paper Bayesian methods are used to analyze some of the properties of a special type of Markov chain. The forward transitions through the chain are followed by inverse transitions (using Bayes' theorem) backward through a copy of the same chain; this will be called a folded Markov chain. If an appropriately defined Euclidean error (between the original input and its “reconstruction” via Bayes' theorem) is minimized with respect to the choice of Markov chain transition probabilities, then the familiar theories of both vector quantizers and self-organizing maps emerge. This approach is also used to derive the theory of self-supervision, in which the higher layers of a multilayer network supervise the lower layers, even though overall there is no external teacher.
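The folded-chain construction described in the abstract can be sketched numerically. The following is a minimal illustration, not the paper's exact formulation: the state spaces, the uniform prior, and the random forward transition matrix are all assumptions chosen for the sketch. Inputs x carry scalar values with a prior p(x); a stochastic matrix gives the forward transitions P(y|x); Bayes' theorem then yields the inverse transitions P(x|y) back through a copy of the chain, and the posterior-mean reconstruction minimizes the expected Euclidean error for a fixed forward chain.

```python
import numpy as np

rng = np.random.default_rng(0)

n_x, n_y = 5, 3                      # number of input states and code indices (assumed sizes)
x_vals = np.linspace(0.0, 1.0, n_x)  # scalar value attached to each input state
p_x = np.full(n_x, 1.0 / n_x)        # uniform prior over inputs (assumption)

# Forward transition probabilities P(y|x): each row sums to 1.
P_fwd = rng.random((n_x, n_y))
P_fwd /= P_fwd.sum(axis=1, keepdims=True)

# Marginal p(y), then the Bayesian inverse P(x|y) = P(y|x) p(x) / p(y).
p_y = p_x @ P_fwd                              # shape (n_y,)
P_inv = (P_fwd * p_x[:, None]) / p_y[None, :]  # each column sums to 1

# Reconstruction of x from each code index y via the posterior mean,
# which minimizes mean squared Euclidean error for this fixed chain.
x_hat = x_vals @ P_inv                         # shape (n_y,)

# Expected reconstruction error
# D = sum over x, y of p(x) P(y|x) (x - x_hat(y))^2.
D = np.sum(p_x[:, None] * P_fwd * (x_vals[:, None] - x_hat[None, :]) ** 2)
print(D)
```

In the paper, it is the further minimization of this error over the choice of P(y|x) that recovers vector quantizers and self-organizing maps; the sketch above only evaluates the error for one fixed chain.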