Variance algorithm for minimization

Abstract
An algorithm is presented for minimizing real-valued differentiable functions on an N-dimensional manifold. In each iteration, the value of the function and its gradient are computed just once and used to form new estimates for the location of the minimum and for the variance matrix (i.e., the inverse of the matrix of second derivatives). A proof is given of convergence within N iterations to the exact minimum and variance matrix for quadratic functions. Whether or not the function is quadratic, each iteration begins at the point where the function has the least of all previously computed values.
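For concreteness, the following is a minimal Python sketch of a quasi-Newton iteration of the kind the abstract describes: one function and gradient evaluation per iteration, a running estimate V of the variance matrix (inverse Hessian), and each iteration restarted from the best point found so far. The symmetric rank-one (SR1) update used for V, the safeguard threshold, and all names (`sr1_minimize`, `tol`, `max_iter`) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sr1_minimize(f, grad, x0, tol=1e-8, max_iter=100):
    """Sketch of a variance-matrix (inverse-Hessian) quasi-Newton method.

    Assumption: a symmetric rank-one (SR1) update of V; for a quadratic
    function with Hessian A, V converges to A^{-1} within n well-conditioned
    updates, matching the N-step convergence claim for quadratics.
    """
    n = len(x0)
    V = np.eye(n)                         # initial variance-matrix estimate
    x_best = np.asarray(x0, dtype=float)  # best point found so far
    f_best, g_best = f(x_best), grad(x_best)

    for _ in range(max_iter):
        if np.linalg.norm(g_best) < tol:
            break
        # Newton-like step using the current variance estimate.
        x_new = x_best - V @ g_best
        f_new, g_new = f(x_new), grad(x_new)  # one evaluation of each

        # Rank-one update of V from the observed gradient change.
        dx, dg = x_new - x_best, g_new - g_best
        r = dx - V @ dg
        denom = r @ dg
        if abs(denom) > 1e-12 * np.linalg.norm(r) * np.linalg.norm(dg):
            V += np.outer(r, r) / denom   # skip near-singular updates

        # Each iteration begins at the point with the least value so far.
        if f_new < f_best:
            x_best, f_best, g_best = x_new, f_new, g_new

    return x_best, V

if __name__ == "__main__":
    # Quadratic test: f(x) = x^T A x / 2 - b^T x, minimum at A^{-1} b.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    x_min, V = sr1_minimize(lambda x: 0.5 * x @ A @ x - b @ x,
                            lambda x: A @ x - b,
                            x0=np.zeros(2))
    print(x_min, np.linalg.solve(A, b))  # should agree
    print(V, np.linalg.inv(A))           # V approximates the variance matrix
```

On this quadratic test the returned V should agree with the exact inverse Hessian after at most n = 2 well-conditioned updates, which is the finite-termination behavior the abstract asserts for quadratic functions.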