Abstract
An important optimization problem that arises in control is to minimize $\varphi ( x )$, the largest eigenvalue (in magnitude) of a symmetric matrix function of x. If the matrix function is affine, $\varphi ( x )$ is convex. However, $\varphi ( x )$ is not differentiable, since the eigenvalues are not differentiable at points where they coalesce. In this paper an algorithm that converges to the minimum of $\varphi ( x )$ at a quadratic rate is outlined. Second derivatives are not required to obtain quadratic convergence in cases where the solution is strongly unique. An important feature of the algorithm is the ability to split a multiple eigenvalue, if necessary, to obtain a descent direction. In these respects the new algorithm represents a significant improvement on the first-order methods of Polak and Wardi and of Doyle. The new method has much in common with the recent work of Fletcher on semidefinite constraints and Friedland, Nocedal, and Overton on inverse eigenvalue problems. Numerical examples a...
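To make the setting concrete, the following sketch minimizes $\varphi(x)$ for a small affine family $A(x) = A_0 + x_1 A_1 + x_2 A_2$ of symmetric matrices. The matrices, and the use of a plain subgradient method, are illustrative assumptions, not the paper's algorithm: the subgradient iteration is at best linearly convergent, which is precisely the limitation the quadratically convergent method in the paper is designed to overcome. Note that the minimizer here is a point where the two eigenvalues coalesce, so $\varphi$ is nondifferentiable at the solution.

```python
import numpy as np

# Hypothetical affine symmetric matrix function A(x) = A0 + x[0]*A1 + x[1]*A2.
# These matrices are illustrative only; A1 and A2 are traceless, so
# trace A(x) = 1 for all x and hence phi(x) >= 0.5 everywhere.
A0 = np.array([[2.0, 0.5], [0.5, -1.0]])
A1 = np.array([[1.0, 0.0], [0.0, -1.0]])
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])

def A(x):
    return A0 + x[0] * A1 + x[1] * A2

def phi(x):
    """phi(x): largest eigenvalue in magnitude of the symmetric matrix A(x)."""
    return np.max(np.abs(np.linalg.eigvalsh(A(x))))

def subgradient(x):
    """One subgradient of phi at x.

    If lambda is an eigenvalue of A(x) of largest magnitude with unit
    eigenvector q, then sign(lambda) * q^T A_i q is the i-th component of a
    subgradient (it is the gradient wherever that eigenvalue is simple).
    """
    w, V = np.linalg.eigh(A(x))
    k = np.argmax(np.abs(w))
    q = V[:, k]
    s = np.sign(w[k])
    return np.array([s * (q @ A1 @ q), s * (q @ A2 @ q)])

# Plain subgradient descent with diminishing steps, tracking the best value
# seen. For this example the minimum is phi = 0.5, attained where A(x) is a
# multiple of the identity -- i.e. where the eigenvalues coalesce.
x = np.zeros(2)
best = phi(x)
for k in range(1, 2001):
    x = x - (1.0 / k) * subgradient(x)
    best = min(best, phi(x))

print(best)  # slowly approaches the optimal value 0.5
```

Since $A(x)$ is affine, $\varphi$ is convex (a maximum of the two convex functions $\lambda_{\max}(A(x))$ and $\lambda_{\max}(-A(x))$), which is what justifies the subgradient iteration; the slow tail of the convergence near the coalescence point is the behavior that motivates the second-order treatment in the paper.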
