Abstract
It can be shown that by replacing the sigmoid activation function often used in neural networks with an exponential function, a neural network can be formed that computes nonlinear decision boundaries. This technique yields decision surfaces that approach the Bayes-optimal boundaries under certain conditions. The nonlinearity of the decision boundaries can be controlled continuously, from linear boundaries for small training sets to any degree of nonlinearity justified by larger training sets. A four-layer neural network of the type proposed can map any input pattern to any number of classifications. The input variables can be either continuous or binary. The decision boundaries can be modified in real time as new data arrive, simply by defining a set of weights equal to the new training vector. The decision boundaries can be implemented using analog 'neurons' that operate entirely in parallel. The proposed organization takes into account the projected pin limitations of neural-net chips of the near future. With a change in architecture, the same components could be used as associative memories, to compute nonlinear multivariate regression surfaces, or to compute a posteriori probabilities of an event.
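To make the idea concrete, the following is a minimal sketch of the classifier the abstract describes: each stored training vector serves as the weight vector of one pattern unit with an exponential (Gaussian-kernel) activation in place of a sigmoid, class scores are formed by summing pattern-unit outputs per class, and the largest score wins. The function name `pnn_classify`, the Gaussian kernel, and the smoothing parameter `sigma` are illustrative assumptions, not details fixed by the abstract.

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """Sketch of a probabilistic-neural-network (Parzen-window) classifier.

    Assumptions (not specified in the abstract): Gaussian kernel,
    smoothing width `sigma`, equal class priors.
    """
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        Xc = train_X[train_y == c]
        # Pattern layer: one exponential unit per stored training vector;
        # its weight vector is the training vector itself.
        d2 = np.sum((Xc - x) ** 2, axis=1)
        activations = np.exp(-d2 / (2.0 * sigma ** 2))
        # Summation layer: combine the pattern-unit outputs for this class.
        scores.append(activations.mean())
    # Output layer: choose the class with the largest density estimate.
    return classes[int(np.argmax(scores))]
```

Note how the real-time update property claimed in the abstract falls out of this structure: incorporating a new training example requires only appending its vector as a new row of `train_X` (a new pattern unit whose weights equal the new training vector), with no retraining pass.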
