Learning in multilayered networks used as autoassociators

Abstract
Gradient descent learning algorithms may get stuck in local minima, which can make learning suboptimal. In this paper, we focus on multilayered networks used as autoassociators and show some relationships with classical linear autoassociators. In addition, using the theoretical framework of our previous research, we derive a condition that is met at the end of the learning process and show that this condition has an intriguing geometrical meaning in the pattern space.
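
To make the setting concrete, the sketch below shows a multilayered network used as an autoassociator, i.e. trained by gradient descent to reproduce its input at its output. This is only an illustrative example, not the paper's formulation: the layer sizes, the sigmoid hidden nonlinearity, the linear output layer, and the squared reconstruction error are all assumptions made for the sketch.

```python
# Illustrative sketch (assumed details, not the paper's exact model):
# a one-hidden-layer autoassociator trained by plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 8, 3                    # input (= output) and hidden sizes (assumed)
X = rng.standard_normal((100, n_in))     # synthetic patterns to autoassociate

W1 = rng.standard_normal((n_in, n_hidden)) * 0.1
W2 = rng.standard_normal((n_hidden, n_in)) * 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.01
for epoch in range(500):
    # Forward pass: hidden representation, then reconstruction of the input.
    H = sigmoid(X @ W1)
    Y = H @ W2                           # linear output layer
    E = Y - X                            # reconstruction error

    # Backward pass: gradients of the mean squared reconstruction error.
    dW2 = H.T @ E / len(X)
    dH = (E @ W2.T) * H * (1 - H)
    dW1 = X.T @ dH / len(X)

    # Gradient descent step; may converge to a local minimum of the error surface.
    W1 -= lr * dW1
    W2 -= lr * dW2

print("final reconstruction MSE:", float(np.mean((sigmoid(X @ W1) @ W2 - X) ** 2)))
```

With a linear hidden layer and squared error, such a network is closely related to the classical linear autoassociator, whose optimal solution is tied to a principal-subspace projection of the data; the nonlinear, multilayered case discussed in the paper is where local minima of gradient descent become the central concern.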
