The Role of Weight Normalization in Competitive Learning
- 1 March 1994
- journal article
- Published by MIT Press in Neural Computation
- Vol. 6 (2), 255-269
- https://doi.org/10.1162/neco.1994.6.2.255
Abstract
The effect of different kinds of weight normalization on the outcome of a simple competitive learning rule is analyzed. It is shown that there are important differences in the representation formed depending on whether the constraint is enforced by dividing each weight by the same amount ("divisive enforcement") or subtracting a fixed amount from each weight ("subtractive enforcement"). For the divisive cases weight vectors spread out over the space so as to evenly represent "typical" inputs, whereas for the subtractive cases the weight vectors tend to the axes of the space, so as to represent "extreme" inputs. The consequences of these differences are examined.
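The divisive/subtractive distinction described in the abstract can be illustrated with a minimal sketch. The function below is an assumed toy implementation, not the paper's exact rule: a winner-take-all unit moves toward the input, and its weight vector is then renormalized to unit L1 norm either by dividing every weight by the same factor or by subtracting the same amount from every weight.

```python
import numpy as np

def competitive_step(W, x, lr=0.1, mode="divisive"):
    """One step of a simple competitive learning rule (illustrative sketch).

    W: (units, dims) non-negative weight matrix; x: (dims,) input.
    The winning unit is nudged toward x, then its weights are
    renormalized so they sum to 1, using the chosen enforcement.
    """
    winner = np.argmax(W @ x)              # unit with the largest response
    w = W[winner] + lr * x                 # Hebbian-style move toward the input
    if mode == "divisive":
        w = w / w.sum()                    # divide each weight by the same amount
    else:                                  # subtractive enforcement
        w = w - (w.sum() - 1.0) / len(w)   # subtract the same amount from each weight
        w = np.clip(w, 0.0, None)          # keep weights non-negative
    W[winner] = w
    return W
```

Iterating such updates over an input distribution is one way to observe the qualitative difference the paper analyzes: divisive enforcement leaves weight vectors spread over the input space, while subtractive enforcement tends to drive them toward the axes.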