On learning simple neural concepts: from halfspace intersections to neural decision lists

Abstract
In this paper, we take a close look at the problem of learning simple neural concepts under the uniform distribution of examples. By simple neural concepts we mean concepts that can be represented as simple combinations of perceptrons (halfspaces). One such class is the class of halfspace intersections. By formalizing the problem of learning halfspace intersections as a set-covering problem, we are led to consider the following sub-problem: given a set of nonlinearly separable examples, find the largest linearly separable subset of it. We give an approximation algorithm for this NP-hard sub-problem. Simulations on both linearly and nonlinearly separable functions show that this approximation algorithm works well under the uniform distribution, outperforming the pocket algorithm used by many constructive neural algorithms. Based on this approximation, we present a greedy method for learning halfspace intersections. We also present extensive numerical results that strongly suggest that this greedy method learns halfspace intersections under the uniform distribution of examples. Finally, we introduce a new class of simple yet very rich neural concepts, which we call neural decision lists, and show how the greedy method can be generalized to handle this class. Both greedy methods, for halfspace intersections and for neural decision lists, were tried on real-world data with very encouraging results, showing that these concepts are important not only from the theoretical point of view, but also in practice.
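The abstract only outlines the greedy set-covering approach, so the following is a minimal sketch of how such a learner could be organized, not the paper's actual algorithm. It assumes a heuristic subroutine `largest_consistent_halfspace` standing in for the paper's approximation to the NP-hard sub-problem (here a simple perceptron-with-restarts heuristic); all function names, the restart scheme, and the parameter defaults are our own illustrative assumptions.

```python
import numpy as np

def perceptron_fit(X, y, epochs=100):
    """Plain perceptron with bias; labels y are +1/-1. Returns (w, b)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified: update
                w += yi * xi
                b += yi
    return w, b

def largest_consistent_halfspace(pos, neg, restarts=10):
    """Heuristic stand-in for the NP-hard sub-problem: find a halfspace
    that contains all positives and excludes as many negatives as possible.
    (Random-restart perceptrons approximate this; the paper's own
    approximation algorithm is different and not reproduced here.)"""
    best, best_excluded = None, -1
    for seed in range(restarts):
        rng = np.random.default_rng(seed)
        # Train on all positives vs. a random subset of the negatives.
        k = int(rng.integers(1, len(neg) + 1))
        sub = neg[rng.choice(len(neg), size=k, replace=False)]
        X = np.vstack([pos, sub])
        y = np.concatenate([np.ones(len(pos)), -np.ones(len(sub))])
        w, b = perceptron_fit(X, y)
        if np.all(pos @ w + b > 0):      # must keep every positive inside
            excluded = int(np.sum(neg @ w + b <= 0))
            if excluded > best_excluded:
                best, best_excluded = (w, b), excluded
    return best

def greedy_halfspace_intersection(pos, neg, max_halfspaces=20):
    """Greedy set cover: each chosen halfspace 'covers' the negatives
    it excludes; repeat until all negatives are covered."""
    halfspaces, remaining = [], neg.copy()
    while len(remaining) and len(halfspaces) < max_halfspaces:
        h = largest_consistent_halfspace(pos, remaining)
        if h is None:
            break
        w, b = h
        halfspaces.append((w, b))
        remaining = remaining[remaining @ w + b > 0]  # still uncovered
    return halfspaces

def predict(halfspaces, x):
    """A point is labelled positive iff it lies inside every halfspace."""
    return all(x @ w + b > 0 for (w, b) in halfspaces)
```

The neural-decision-list generalization would, roughly speaking, let each greedily learned halfspace carry its own output label and be tested in order, in the spirit of Rivest's decision lists over perceptron tests; only the intersection case is sketched above.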
