General potential surfaces and neural networks

Abstract
Investigation of Hopfield’s neural-network model of associative memory led to an associative-memory model based on a generalized potential surface. In this model there are no spurious memories, and any set of desired points can be stored with unlimited capacity (in the continuous-time, real-space version of the model). The system has no limit cycles, and by a proper choice of the design parameters the basins of attraction can extend up to half the distance between stored points. A discrete-time version, whose state space is the unit hypercube, is also derived; for any fixed desired size of the basins of attraction, its worst-case capacity grows exponentially with the number of neurons, at a rate that is asymptotically optimal in the information-theoretic sense. The computational complexity of this model is similar to that of the Hopfield memory. The results are derived under an axiomatic approach that specifies the desired properties and shows that the above model is the only one achieving them.
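As a rough illustration of the recall mechanism in the continuous-time model, the sketch below runs gradient descent on a potential built as a sum of attracting wells centered at the stored points; each stored point is a singular minimum, so the flow converges to a memory rather than a spurious mixture. The inverse-power kernel 1/r^β, the normalized step, and all parameter values here are illustrative assumptions, not the paper’s exact construction.

```python
import numpy as np

def recall(probe, memories, beta=2.0, step=0.01, tol=1e-2, max_iters=5000):
    """Recall by gradient flow on V(x) = -sum_i ||x - m_i||^(-beta).

    Each stored pattern m_i sits at a singular well of V, so the flow is
    pulled into a stored point. Kernel and parameters are illustrative,
    not the paper's construction.
    """
    x = np.asarray(probe, dtype=float).copy()
    M = np.atleast_2d(np.asarray(memories, dtype=float))
    for _ in range(max_iters):
        diffs = x - M                          # (k, n): x - m_i per memory
        dists = np.linalg.norm(diffs, axis=1)
        if dists.min() < tol:                  # reached a stored point
            break
        # gradient of -sum_i dists_i^(-beta); the nearest well dominates
        grad = beta * (diffs / dists[:, None] ** (beta + 2)).sum(axis=0)
        x -= step * grad / (np.linalg.norm(grad) + 1e-12)  # normalized step
    return M[np.argmin(np.linalg.norm(x - M, axis=1))]

# A probe inside a basin converges to the nearest stored point.
mems = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
print(recall([0.8, 0.3], mems))  # -> [1. 0.]
```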
