The Transition to Perfect Generalization in Perceptrons
- 1 September 1991
- journal article
- Published by MIT Press in Neural Computation
- Vol. 3 (3), pp. 386-401
- https://doi.org/10.1162/neco.1991.3.3.386
Abstract
Several recent papers (Gardner and Derrida 1989; Györgyi 1990; Sompolinsky et al. 1990) have found, using methods of statistical physics, that a transition to perfect generalization occurs in training a simple perceptron whose weights can only take values ±1. We give a rigorous proof of this phenomenon. That is, we show, for α = 2.0821, that if at least αn examples are drawn from the uniform distribution on {+1, −1}^n and classified according to a target perceptron w_t ∈ {+1, −1}^n as positive or negative according to whether w_t·x is nonnegative or negative, then the probability is 2^{−(√n)} that there is any other such perceptron consistent with the examples. Numerical results indicate further that perfect generalization holds for α as low as 1.5.
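The setup in the abstract can be checked empirically for small n. The following is a minimal simulation sketch (assuming Python with NumPy; the function name `nontarget_consistent` and all parameters are illustrative, not from the paper): it draws αn uniform ±1 examples, labels them by the sign rule w_t·x ≥ 0, and exhaustively scans all 2^n candidate weight vectors for a non-target perceptron consistent with every example. Note the exhaustive scan is only feasible for small n, whereas the paper's result concerns the large-n limit.

```python
import itertools
import numpy as np

def nontarget_consistent(n, alpha, rng):
    """One trial: does any perceptron other than the target fit all examples?"""
    w_t = rng.choice([-1, 1], size=n)             # target weights in {+1, -1}^n
    m = int(np.ceil(alpha * n))                   # number of training examples
    X = rng.choice([-1, 1], size=(m, n))          # uniform examples from {+1, -1}^n
    y = np.where(X @ w_t >= 0, 1, -1)             # label +1 iff w_t . x is nonnegative
    # Exhaustive scan over all 2^n candidate weight vectors (small n only).
    W = np.array(list(itertools.product((-1, 1), repeat=n)))
    preds = np.where(W @ X.T >= 0, 1, -1)         # predictions of every candidate
    consistent = np.all(preds == y, axis=1)
    return consistent.sum() > 1                   # > 1 because w_t itself is consistent

rng = np.random.default_rng(0)
n, alpha, trials = 15, 2.0821, 100
fails = sum(nontarget_consistent(n, alpha, rng) for _ in range(trials))
print(f"n={n}, alpha={alpha}: a non-target consistent perceptron appeared "
      f"in {fails}/{trials} trials")
```

Rerunning with alpha lowered toward 1.5 gives a rough, small-n picture of the transition region the numerical results in the paper describe.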
References
- H. Sompolinsky, N. Tishby, and H. S. Seung, "Learning from examples in large neural networks," Physical Review Letters, 1990
- G. Györgyi, "First-order transition to perfect generalization in a neural network with binary synapses," Physical Review A, 1990
- E. Gardner and B. Derrida, "Three unfinished works on the optimal storage capacity of networks," Journal of Physics A: Mathematical and General, 1989
- E. B. Baum and D. Haussler, "What Size Net Gives Valid Generalization?" Neural Computation, 1989