Formulations of Support Vector Machines: A Note from an Optimization Point of View
- 1 February 2001
- journal article
- Published by MIT Press in Neural Computation
- Vol. 13 (2), pp. 307–317
- https://doi.org/10.1162/089976601300014547
Abstract
In this article, we discuss issues concerning formulations of support vector machines (SVMs) from an optimization point of view. First, SVMs map training data into a higher- (possibly infinite-) dimensional space. Currently, primal and dual formulations of the SVM are derived in finite-dimensional space and readily extended to infinite-dimensional space. We rigorously discuss the primal-dual relation in infinite-dimensional spaces. Second, SVM formulations contain penalty terms, which differ from unconstrained penalty functions in optimization. Traditionally, an unconstrained penalty function approximates a constrained problem as the penalty parameter increases. We are interested in similar properties for SVM formulations. For two of the most popular SVM formulations, we show that one enjoys the properties of an exact penalty function, while the other behaves only like a traditional penalty function, converging as the penalty parameter goes to infinity.
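For context, the following is a minimal sketch (in amsmath LaTeX, not taken from the paper) of the classical soft-margin SVM primal and its Wolfe dual, in the style of Cortes and Vapnik's support-vector networks. The notation ($w$, $b$, slacks $\xi_i$, penalty parameter $C$, kernel $K$) is illustrative and may differ from the paper's own; the abstract does not specify which two formulations it compares.

```latex
% Minimal sketch of the classical soft-margin SVM primal and its Wolfe
% dual; illustrative notation only, not the paper's own formulation.
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Primal: the penalty term C * sum(xi_i) is what the abstract contrasts
% with classical unconstrained penalty functions in optimization.
\begin{align}
\min_{w,\,b,\,\xi}\quad & \tfrac{1}{2}\, w^{\top} w + C \sum_{i=1}^{l} \xi_i \notag\\
\text{subject to}\quad  & y_i \bigl( w^{\top} \phi(x_i) + b \bigr) \ge 1 - \xi_i, \notag\\
                        & \xi_i \ge 0, \quad i = 1, \dots, l. \notag
\end{align}

% Wolfe dual: the map phi enters only through the kernel
% K(x_i, x_j) = phi(x_i)^T phi(x_j), which is what allows phi to map
% into a possibly infinite-dimensional space.
\begin{align}
\max_{\alpha}\quad & \sum_{i=1}^{l} \alpha_i
  - \tfrac{1}{2} \sum_{i=1}^{l} \sum_{j=1}^{l}
    \alpha_i \alpha_j\, y_i y_j\, K(x_i, x_j) \notag\\
\text{subject to}\quad & 0 \le \alpha_i \le C, \quad i = 1, \dots, l, \notag\\
                       & \textstyle\sum_{i=1}^{l} y_i \alpha_i = 0. \notag
\end{align}

\end{document}
```

The penalty question raised in the abstract can be read off this sketch: an exact penalty property would mean the solution stabilizes once $C$ exceeds some finite threshold, whereas a traditional penalty function only approaches the constrained (hard-margin) solution as $C \to \infty$. Likewise, since $\phi$ appears in the dual only through $K$, the primal-dual relation is what must be verified rigorously when $\phi$ maps into an infinite-dimensional space.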