Abstract
The basic convergence properties of evolutionary optimization algorithms are investigated. Analysis indicates that the methods studied converge asymptotically to globally optimal solutions. The results also indicate that genetic algorithms may prematurely stagnate at solutions that are not even locally optimal. Function optimization experiments are conducted to illustrate these mathematical properties. Evolutionary programming is found to outperform genetic algorithms in searching two response surfaces that do not possess local optima, and the observed differences in performance are statistically significant.