Abstract
In certain stochastic-approximation applications, sufficient conditions for mean-square and probability-one convergence are satisfied within some unknown bounded convex set, referred to as a convergence region, but are not satisfied globally. Important examples arise in decision-directed procedures. If a convergence region were known, placing a reflecting barrier at its boundary would solve the problem: the estimate would then converge in mean square and with probability one. Since a convergence region may not be known in practice, the possibility of nonconvergence must be accepted. Let A be the event that the estimation sequence never crosses the boundary of a particular convergence region. Conditioned on A, the sequence of estimates converges in mean square and with probability one, because it coincides with the sequence that would be produced by a reflecting barrier at that boundary. Therefore, the unconditional probability of convergence exceeds the probability of the event A. Starting from this principle, a lower bound on the convergence probability is derived in this paper. When the convergence conditions are satisfied globally, the results can also be used to bound the probability distribution of the maximum error. Specific examples are presented.
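As a rough illustration of the reflecting-barrier idea discussed above (a sketch, not taken from the paper), the snippet below runs a one-dimensional Robbins-Monro recursion and projects the estimate back onto a known interval whenever an update leaves it. The regression function g, the interval [lo, hi], the step sizes, and the noise model are all illustrative assumptions, and projection (truncation) is used as a simple stand-in for a true reflecting barrier.

```python
import numpy as np

# Minimal sketch, assuming a 1-D Robbins-Monro recursion
#   x_{n+1} = x_n - a_n * (g(x_n) + noise),
# where g is assumed well-behaved only inside a known convergence
# region [lo, hi]. All names and constants are hypothetical.

rng = np.random.default_rng(0)

def g(x):
    # Hypothetical regression function with root at x = 0;
    # monotone on a bounded region around the root.
    return np.tanh(x)

lo, hi = -2.0, 2.0   # assumed known convergence region
x = 1.5              # initial estimate inside the region
for n in range(1, 10_001):
    a_n = 1.0 / n                     # step sizes: sum a_n = inf, sum a_n^2 < inf
    y_n = g(x) + rng.normal(0, 1.0)   # noisy observation of g at x
    x = x - a_n * y_n                 # Robbins-Monro update
    x = min(max(x, lo), hi)           # barrier: project back onto [lo, hi]

print(f"estimate after 10000 steps: {x:.4f}")  # should lie near the root 0
```

With the barrier in place the estimate cannot leave [lo, hi], which is the situation the abstract describes; removing the projection line recovers the unconstrained case, where convergence is only guaranteed on the event that the iterates happen never to cross the boundary.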