Storing patterns in a spin-glass model of neural networks near saturation

Abstract
A Gaussian approximation for the synaptic noise and the n → 0 replica method are used to study spin-glass models of neural networks near saturation, i.e. when the number p of stored patterns grows with the size N of the network as p = αN. Qualitative features are predicted surprisingly well. For instance, at T = 0 the linear Hopfield network provides effective associative memory, with errors not exceeding 0.05%, for α_c ≈ 0.15. In a network with clipped synapses, the number of patterns that can be stored within a given error tolerance is reduced by a factor of 2/π compared with the linear Hopfield model. A simple learning-within-bounds algorithm is found to interpolate continuously between the linear Hopfield model and the network with clipped synapses.
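
For orientation, the two coupling matrices contrasted above can be written explicitly. The notation below (patterns ξ_i^μ = ±1, couplings J_{ij}) is the standard Hopfield-model convention and is an assumption here, since the abstract itself fixes no notation:

\[ J_{ij} = \frac{1}{N}\sum_{\mu=1}^{p}\xi_i^{\mu}\xi_j^{\mu} \qquad \text{(linear Hopfield model)}, \]
\[ \tilde{J}_{ij} \propto \operatorname{sgn}\!\Bigl(\sum_{\mu=1}^{p}\xi_i^{\mu}\xi_j^{\mu}\Bigr) \qquad \text{(clipped synapses)}, \]

where the overall scale of \tilde{J}_{ij} is immaterial at T = 0. The quoted 2/π reduction for clipped synapses is consistent with an elementary signal-to-noise estimate: replacing the (approximately Gaussian) Hebbian sum by its sign lowers the retrieval signal-to-noise ratio by a factor \sqrt{2/\pi}, and the storage capacity scales as the square of this ratio.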