Principal component analysis by gradient descent on a constrained linear Hebbian cell
- 1 January 1989
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- pp. 373-380, vol. 1
- https://doi.org/10.1109/ijcnn.1989.118611
Abstract
The behavior of a linear computing unit is analyzed during learning by gradient descent of a cost function equal to the sum of a variance-maximization term and a weight-normalization term. The landscape of this cost function is shown to be composed of one local maximum, a set of saddle points, and one global minimum aligned with the principal components of the input patterns. The cost landscape can be described in terms of the hyperspheres, hypercrests, and hypervalleys associated with each of these principal components. Using this description, it is shown that the learning trajectory converges to the global minimum of the landscape under certain conditions on the starting weights and the learning rate of the descent procedure. Furthermore, a precise description of the learning trajectory in this cost landscape is provided. Extensions and implications of the algorithm are discussed using networks of such cells.
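The abstract does not give the exact cost function, but a common form consistent with its description is J(w) = -(1/2) wᵀCw + (μ/4)(‖w‖² - 1)², where C is the input covariance: the first term rewards output variance and the second penalizes deviation of the weight norm from 1. A minimal sketch under that assumption (the data, μ, and learning rate η below are illustrative choices, not values from the paper) shows plain gradient descent driving a linear unit's weights into alignment with the leading principal component:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs with one dominant principal direction.
X = rng.normal(size=(500, 3)) @ np.diag([3.0, 1.0, 0.3])
C = X.T @ X / len(X)  # input covariance matrix

# Assumed cost: J(w) = -1/2 w^T C w + (mu/4) (||w||^2 - 1)^2
# so grad J = -C w + mu (||w||^2 - 1) w.
mu, eta = 5.0, 0.01     # normalization strength, learning rate
w = rng.normal(scale=0.1, size=3)  # small random starting weights

for _ in range(2000):
    grad = -C @ w + mu * (w @ w - 1.0) * w
    w -= eta * grad  # gradient descent step

# Compare the learned direction with the top eigenvector of C.
top = np.linalg.eigh(C)[1][:, -1]
cos = abs(w @ top) / np.linalg.norm(w)
print(f"cosine with leading eigenvector: {cos:.4f}")
```

At a stationary point the gradient condition gives Cw = μ(‖w‖² - 1)w, so every critical point is an eigenvector of C, matching the abstract's picture of one maximum, saddles at the minor components, and a global minimum along the principal component.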