Abstract
Hardware to implement feedforward neural networks has been developed for the evaluation of learning algorithms and the prototyping of applications. To allow the construction of networks with arbitrary architectures, CMOS VLSI building-block components (e.g. arrays of neurons and synapses) have been designed. These can be cascaded to form networks with hundreds of neurons per layer. A 64-channel multiplexer input neuron chip serves to buffer stored charges for injection into the first synaptic layer. A 32x32 synapse chip design uses multiplier circuits to generate a conductance from stored analog charges representing weights. A 32-channel variable-gain neuron chip applies an adjustable-gain sigmoidal activation function to the sum of currents from the previous synaptic layer. Learning is performed by a host computer that can download weights and inputs onto the feedforward hardware, and read resultant network outputs. Weights and input values are stored as charges on on-chip capacitors; these are serially and invisibly refreshed by off-chip circuits that convert values stored in digital memory into analog signals.
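The following is a minimal behavioral sketch, in software, of how the cascaded building blocks described above compose a feedforward pass: a synapse array acts as an analog multiplier bank (weights stored as charges) whose output currents are summed, and a variable-gain neuron stage applies the sigmoidal activation. The class names, gain values, and random weights are illustrative assumptions, not part of the hardware description; the actual chips perform these operations in analog CMOS.

```python
import numpy as np

def sigmoid(x, gain):
    """Adjustable-gain sigmoidal activation, analogous to the variable-gain neuron chip."""
    return 1.0 / (1.0 + np.exp(-gain * x))

class SynapseArray:
    """Stand-in for a 32x32 synapse chip: each cell multiplies its input by a weight
    (stored on-chip as an analog charge), and the resulting currents are summed."""
    def __init__(self, weights):
        self.weights = np.asarray(weights)        # shape (outputs, inputs); downloaded by host

    def forward(self, inputs):
        # Summed output currents per channel, i.e. a matrix-vector product.
        return self.weights @ np.asarray(inputs)

class NeuronLayer:
    """Stand-in for a 32-channel variable-gain neuron chip."""
    def __init__(self, gain=1.0):
        self.gain = gain                          # hypothetical gain setting

    def forward(self, currents):
        return sigmoid(currents, self.gain)

# Two cascaded synapse/neuron stages, as the building blocks might be wired on a board.
rng = np.random.default_rng(0)
w1, w2 = rng.uniform(-1, 1, (32, 32)), rng.uniform(-1, 1, (32, 32))
x = rng.uniform(0, 1, 32)                         # buffered inputs from the input neuron chip
hidden = NeuronLayer(gain=2.0).forward(SynapseArray(w1).forward(x))
output = NeuronLayer(gain=2.0).forward(SynapseArray(w2).forward(hidden))
print(output[:4])
```

In the hardware, the host computer would play the role of the code that sets `w1`, `w2`, and `x`, and would read back `output` after the analog feedforward pass settles.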