On self-organizing algorithms and networks for class-separability features

Abstract
We describe self-organizing learning algorithms and associated neural networks that extract features which preserve class separability. As a first step, an adaptive algorithm for computing Q^{-1/2}, where Q is the correlation or covariance matrix of a random vector sequence, is described. Convergence of this algorithm with probability one is proven using stochastic approximation theory, and a single-layer linear network architecture implementing the algorithm, which we call the Q^{-1/2} network, is presented. Using this network, we describe feature extraction architectures for: 1) unimodal and multicluster Gaussian data in the multiclass case; 2) multivariate linear discriminant analysis (LDA) in the multiclass case; and 3) the Bhattacharyya distance measure in the two-class case. The LDA and Bhattacharyya distance features are extracted by concatenating the Q^{-1/2} network with a principal component analysis (PCA) network, and this two-layer network is proven to converge with probability one. Every network discussed in the study is trained on a flow, or sequence, of inputs. Numerical studies of the performance of the networks on multiclass random data are presented.
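
To make the two stages concrete, the following is a minimal NumPy sketch of an adaptive Q^{-1/2} estimator driven by a stream of samples. The update rule W <- W + eta_k (I - W x x^T W), the function name, and the decaying-gain schedule are illustrative assumptions: the rule is a standard stochastic-approximation scheme whose fixed point satisfies W Q W = I (hence W = Q^{-1/2}), not necessarily the exact algorithm analyzed in the paper.

```python
import numpy as np

def adaptive_inverse_sqrt(samples, eta0=0.01, tau=1000.0):
    """Stochastic-approximation sketch for estimating Q^{-1/2} from a stream
    of random vectors x_k with Q = E[x x^T].  At a fixed point of the update
        W <- W + eta_k * (I - W x_k x_k^T W)
    we have W Q W = I in expectation, i.e. W = Q^{-1/2}.  Illustrative only;
    not necessarily the exact rule of the paper."""
    d = samples.shape[1]
    W = np.eye(d)                      # running symmetric estimate of Q^{-1/2}
    I = np.eye(d)
    for k, x in enumerate(samples):
        eta = eta0 / (1.0 + k / tau)   # decaying gain, as stochastic approximation requires
        x = x.reshape(-1, 1)
        W = W + eta * (I - W @ x @ x.T @ W)
    return W

# Usage on synthetic data: W Q W should approach the identity.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
Q = A @ A.T + np.eye(3)                # a positive-definite covariance matrix
X = rng.multivariate_normal(np.zeros(3), Q, size=30000)
W = adaptive_inverse_sqrt(X)
print(np.round(W @ Q @ W, 2))
```

The cascade used for the LDA features can likewise be sketched in batch form: whiten the data with Sw^{-1/2} (the role played by the Q^{-1/2} network) and then apply PCA to the between-class scatter of the whitened data. The function name, the batch eigendecompositions, and the variable names below are assumptions for illustration; the paper's networks perform both stages adaptively on a sequence of inputs.

```python
def lda_features_via_whitening(X, y, n_features):
    """Batch sketch of the two-stage LDA feature extractor:
    stage 1 whitens with Sw^{-1/2}, stage 2 does PCA on the whitened
    between-class scatter.  Returns a matrix whose rows map x -> features."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))              # within-class scatter
    Sb = np.zeros((d, d))              # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * diff @ diff.T
    # Stage 1: whitening transform Sw^{-1/2} (here via a batch eigendecomposition;
    # adaptively, this is the job of the Q^{-1/2} network).
    evals, evecs = np.linalg.eigh(Sw)
    W_white = evecs @ np.diag(evals ** -0.5) @ evecs.T
    # Stage 2: PCA on the between-class scatter of the whitened data.
    Sb_w = W_white @ Sb @ W_white
    evals_b, evecs_b = np.linalg.eigh(Sb_w)
    top = evecs_b[:, np.argsort(evals_b)[::-1][:n_features]]
    # Overall feature transform: x -> top^T Sw^{-1/2} x
    return (W_white @ top).T
```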