Incorporating prior information in machine learning by creating virtual examples
- 1 November 1998
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in Proceedings of the IEEE
- Vol. 86 (11), 2196-2209
- https://doi.org/10.1109/5.726787
Abstract
One of the key problems in supervised learning is the insufficient size of the training set. The natural way for an intelligent learner to counter this problem and successfully generalize is to exploit prior information that may be available about the domain or that can be learned from prototypical examples. We discuss the notion of using prior knowledge by creating virtual examples and thereby expanding the effective training-set size. We show that in some contexts this idea is mathematically equivalent to incorporating the prior knowledge as a regularizer, suggesting that the strategy is well motivated. The process of creating virtual examples in real-world pattern recognition tasks is highly nontrivial. We provide demonstrative examples from object recognition and speech recognition to illustrate the idea.
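The core idea described in the abstract can be sketched in a few lines: prior knowledge enters as transforms under which the label is known to be invariant, and each transform applied to each training pair yields a new virtual example. This is a minimal illustrative sketch, not the paper's implementation; the function name and the mirror-symmetry example are assumptions chosen for demonstration.

```python
def make_virtual_examples(examples, transforms):
    """Expand a labeled training set with virtual examples.

    Prior knowledge is encoded as a list of transforms under which the
    label is assumed invariant: for every pair (x, y) and transform t,
    the pair (t(x), y) is appended as a virtual example.
    (Illustrative sketch; names are hypothetical, not from the paper.)
    """
    virtual = list(examples)
    for t in transforms:
        virtual.extend((t(x), y) for x, y in examples)
    return virtual

# Example: mirror symmetry as the assumed label-preserving invariance.
data = [((1, 2, 3), "A"), ((4, 5, 6), "B")]
augmented = make_virtual_examples(data, [lambda x: tuple(reversed(x))])
# augmented holds the two originals plus their two mirrored copies
```

In practice (as the abstract notes), finding valid transforms for real pattern recognition tasks is the hard part; the bookkeeping above is the easy part.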