Minimally Connective, Auto-associative, Neural Networks

Abstract
Classic barriers to using auto-associative neural networks to model mammalian memory include the unrealistically high synaptic connectivity of fully connected networks, and the relatively small amount of information that can be stored in networks with realistic numbers of synapses per neuron and learning rules amenable to physiological implementation. We describe extremely large auto-associative networks with low synaptic density. The networks have no direct connections between neurons of the same layer. Rather, the neurons of one layer are 'linked' by connections to neurons of some other layer. Patterns of projection of one layer onto another which form projective planes, or other cognate geometries, confer considerable computational power on the network.
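The abstract does not give an explicit construction, but the 'linking' idea can be illustrated with the smallest projective plane, the Fano plane (order 2: 7 points, 7 lines, 3 points per line). In this hypothetical sketch, one layer's neurons play the role of points and the other layer's neurons play the role of lines; the names and layer sizes are illustrative assumptions, not the paper's model:

```python
# Illustrative sketch (assumption): the Fano plane as a bipartite
# connectivity pattern "linking" two neuron layers. Layer A has 7
# "point" neurons, layer B has 7 "line" neurons; a point neuron
# connects to a line neuron iff the point lies on that line.

LINES = [
    (0, 1, 2), (0, 3, 4), (0, 5, 6),
    (1, 3, 5), (1, 4, 6),
    (2, 3, 6), (2, 4, 5),
]

def incidence_matrix(lines, n_points=7):
    """Biadjacency matrix: rows = point neurons, cols = line neurons."""
    m = [[0] * len(lines) for _ in range(n_points)]
    for j, line in enumerate(lines):
        for p in line:
            m[p][j] = 1
    return m

M = incidence_matrix(LINES)

# Low synaptic density: each neuron makes only 3 of 7 possible connections.
assert all(sum(row) == 3 for row in M)                              # point degrees
assert all(sum(M[i][j] for i in range(7)) == 3 for j in range(7))   # line degrees

# Yet any two point neurons are linked through exactly one shared line
# neuron, so the layer is effectively fully connected at one remove.
for a in range(7):
    for b in range(a + 1, 7):
        shared = sum(M[a][j] * M[b][j] for j in range(7))
        assert shared == 1
```

The projective-plane incidence property is what makes this pattern attractive: connectivity per neuron grows only as roughly the square root of layer size, while every pair of same-layer neurons still shares exactly one intermediary.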
