Abstract
Reports on the characteristics of a net that incorporates higher-order effects into supervised learning in ways different from those previously proposed. These higher-order effects are introduced through nonlinear functional transforms applied via links rather than at nodes; specific instances include transforming the input vector into higher-order tensors. Learning-rate increases observed for several examples are described, as are the resulting simplifications of the network architecture.
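The abstract describes injecting higher-order effects by transforming the input vector (for instance, into higher-order tensor terms built from outer products) before a simple supervised layer. A minimal sketch of that idea is given below, assuming a NumPy implementation with a quadratic (outer-product) expansion and delta-rule training of a single layer; the function names, hyperparameters, and the XOR demonstration are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the authors' code) of a functional-link style expansion:
# the input is augmented with its second-order outer-product terms, and a
# single linear layer is trained with the delta rule on the expanded vector.
import numpy as np

def tensor_expand(x):
    """Augment x with a bias term and its unique second-order (outer-product) terms."""
    second_order = np.outer(x, x)[np.triu_indices(len(x))]
    return np.concatenate(([1.0], x, second_order))

def train_functional_link(X, y, lr=0.5, epochs=1000, seed=0):
    """Delta-rule training of a single tanh unit on the expanded inputs (illustrative)."""
    rng = np.random.default_rng(seed)
    Z = np.array([tensor_expand(x) for x in X])
    w = rng.normal(scale=0.1, size=Z.shape[1])
    for _ in range(epochs):
        for z, t in zip(Z, y):
            out = np.tanh(w @ z)                     # node keeps a simple nonlinearity
            w += lr * (t - out) * (1 - out**2) * z   # gradient step on squared error
    return w

# XOR, which a single-layer net cannot learn from the raw inputs, becomes
# linearly separable after the expansion (the x1*x2 term carries the structure).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])
w = train_functional_link(X, y)
print([np.sign(np.tanh(w @ tensor_expand(x))) for x in X])  # should match the signs of y
```

Because the nonlinearity lives in the link-side expansion rather than in extra hidden nodes, the trainable part stays a single layer, which is one way to read the architecture simplification mentioned in the abstract.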
