Convergence Properties of Learning in ART1

Abstract
We consider the ART1 neural network architecture. It is shown that, in the fast-learning case, an ART1 network repeatedly presented with an arbitrary list of binary input patterns self-stabilizes the recognition code of every size-l pattern in at most l list presentations.
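To make the setting concrete, the following is a minimal sketch of fast-learning ART1 on binary patterns: templates are binary vectors, a winning category must pass a vigilance test, and fast learning replaces the winning template by its intersection with the input. The choice function, the vigilance parameter `rho`, and the tie-breaking parameter `beta` follow the standard ART1 formulation; all function and variable names here are illustrative, not taken from the paper.

```python
import numpy as np

def art1_fast_learn(patterns, rho=0.7, beta=1.0, epochs=5):
    """Fast-learning ART1 sketch (illustrative, not the paper's notation).

    patterns : list of binary vectors (the input list, presented repeatedly)
    rho      : vigilance parameter in (0, 1]
    beta     : small constant in the bottom-up choice function
    Returns the committed top-down templates after `epochs` list presentations.
    """
    templates = []  # committed top-down templates (binary numpy arrays)
    for _ in range(epochs):
        for pat in patterns:
            I = np.asarray(pat, dtype=bool)
            # F2 competition: rank categories by the bottom-up choice function
            # T_j = |I ∧ w_j| / (beta + |w_j|), largest first.
            order = sorted(
                range(len(templates)),
                key=lambda j: -((I & templates[j]).sum()
                                / (beta + templates[j].sum())))
            for j in order:
                # Vigilance test: |I ∧ w_j| / |I| >= rho means resonance.
                if (I & templates[j]).sum() / I.sum() >= rho:
                    # Fast learning: template -> template ∧ input.
                    templates[j] = I & templates[j]
                    break
            else:
                # No category resonates: commit a new one coded by I itself.
                templates.append(I.copy())
    return templates
```

With `rho=0.7`, the three patterns `[1,1,0,0]`, `[1,1,1,0]`, `[0,0,1,1]` each commit their own category on the first presentation, and the templates are unchanged on every later presentation, illustrating the self-stabilization the abstract refers to.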