A Finite Newton Method for Classification
- 1 January 2002
- journal article
- research article
- Published by Taylor & Francis in Optimization Methods and Software
- Vol. 17 (5), 913-929
- https://doi.org/10.1080/1055678021000028375
Abstract
A fundamental classification problem of data mining and machine learning is that of minimizing a strongly convex, piecewise quadratic function on the n-dimensional real space R^n. We show finite termination of a Newton method at the unique global solution starting from any point in R^n. If the function is well conditioned, then no stepsize is required from the start; if not, an Armijo stepsize is used. In either case, the algorithm finds the unique global minimum in a finite number of iterations.
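The abstract's scheme can be illustrated on a generic strongly convex piecewise quadratic of the form f(x) = ½‖x‖² + (ν/2)‖(b − Ax)₊‖², using a generalized Hessian for the Newton step and an Armijo backtracking stepsize. This is a minimal sketch under those assumptions (the matrix `A`, vector `b`, and parameter `nu` are illustrative), not the paper's exact SVM formulation:

```python
import numpy as np

def generalized_newton(A, b, nu=1.0, tol=1e-8, max_iter=50):
    """Generalized Newton method with Armijo stepsize for the strongly
    convex piecewise quadratic f(x) = 0.5||x||^2 + (nu/2)||(b - Ax)_+||^2.
    Illustrative sketch only, not the paper's exact formulation."""
    m, n = A.shape
    x = np.zeros(n)

    def f(z):
        return 0.5 * z @ z + 0.5 * nu * np.sum(np.maximum(0.0, b - A @ z) ** 2)

    for _ in range(max_iter):
        r = np.maximum(0.0, b - A @ x)            # plus-function residual
        grad = x - nu * A.T @ r                   # gradient of f
        if np.linalg.norm(grad) < tol:
            break
        active = (b - A @ x) > 0                  # rows where the plus function is active
        H = np.eye(n) + nu * A[active].T @ A[active]   # a generalized Hessian of f
        d = np.linalg.solve(H, -grad)             # Newton direction
        # Armijo backtracking: accept t with f(x + t d) <= f(x) + sigma * t * grad.d
        t, sigma, fx = 1.0, 1e-4, f(x)
        while f(x + t * d) > fx + sigma * t * (grad @ d):
            t *= 0.5
        x = x + t * d
    return x

# Separable example: f(x) = sum_i [0.5 x_i^2 + 0.5 (1 - x_i)_+^2],
# whose unique minimizer is x_i = 0.5 in each coordinate.
x_star = generalized_newton(np.eye(2), np.ones(2))
```

Because f is strongly convex, the Hessian system is always nonsingular, and on this well-conditioned example the full step t = 1 is accepted and the method terminates after a single Newton step, mirroring the finite-termination claim.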
This publication has 16 references indexed in Scilit:
- Bound constrained quadratic programming via piecewise quadratic functions. Mathematical Programming, 1999
- Successive overrelaxation for support vector machines. IEEE Transactions on Neural Networks, 1999
- A Finite Continuation Algorithm for Bound Constrained Quadratic Programming. SIAM Journal on Optimization, 1998
- A New Algorithm for Solving Strictly Convex Quadratic Programs. SIAM Journal on Optimization, 1997
- Minimization of SC¹ functions and the Maratos effect. Operations Research Letters, 1995
- A Newton Method for Convex Regression, Data Smoothing, and Quadratic Programming with Bounded Constraints. SIAM Journal on Optimization, 1993
- A nonsmooth version of Newton's method. Mathematical Programming, 1993
- On second-order sufficient optimality conditions for C¹,¹-optimization problems. Optimization, 1988
- Refinements of necessary optimality conditions in nondifferentiable programming II. Published by Springer Nature, 1982
- Minimization of functions having Lipschitz continuous first partial derivatives. Pacific Journal of Mathematics, 1966