Abstract
Coordinated orienting movements can be performed accurately without direct sensory control. Ocular saccades, for instance, have been shown to be reprogrammed after target disappearance when an intervening eye movement is electrically triggered before saccade onset. Saccadic eye movements can also be executed toward memorized targets, even when the subject has been passively moved in darkness. Two hypotheses have been proposed to account for this goal-invariance property: either (i) the goal is reconstructed and memorized in the stable frame of reference linked to the environment ("allocentric" coordinates) or (ii) the goal is selected and memorized in sensor-related maps ("egocentric" coordinates) and is continuously updated by efference copies of the motor commands. In this paper, we describe a formal neural network based on the second hypothesis. The simulation results show that target position can be memorized and accurately updated in a topologically ordered map, using a velocity feedback signal. Moreover, this network has been trained with a simple learning procedure that uses the intermittent, recurring visual afferent signal as the teaching signal. A similar mechanism could be involved in the control of limb movements.
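To illustrate the egocentric-updating principle described above (not the paper's actual network), the following minimal Python sketch keeps a remembered target as an activity bump on a one-dimensional topologically ordered map and shifts that bump using only an efference copy of eye velocity, with no visual reafference during the movement. The grid size, tuning width, and interpolation-based shift rule are illustrative assumptions.

    # Minimal sketch (assumed parameters, not the paper's model):
    # a memorized target in egocentric (retinal) coordinates is
    # updated from an efference copy of eye velocity.
    import numpy as np

    N = 101                                   # units in the topologically ordered map
    positions = np.linspace(-50.0, 50.0, N)   # preferred retinal eccentricities (deg)
    sigma = 2.0                               # tuning width of the activity bump (deg)

    def encode(target_deg):
        """Gaussian activity bump centred on the target's retinal position."""
        return np.exp(-(positions - target_deg) ** 2 / (2 * sigma ** 2))

    def update(activity, eye_velocity_deg_s, dt):
        """Shift the bump opposite to the eye displacement, using only the
        velocity efference copy (no visual input during the shift)."""
        shift = -eye_velocity_deg_s * dt      # target eccentricity decreases as the eye rotates toward it
        return np.interp(positions, positions + shift, activity, left=0.0, right=0.0)

    # Target flashed at +20 deg, then the eye rotates 20 deg in darkness:
    activity = encode(20.0)
    for _ in range(100):                      # 100 steps of 10 ms at 20 deg/s
        activity = update(activity, eye_velocity_deg_s=20.0, dt=0.01)

    print(positions[np.argmax(activity)])     # ~0 deg: the memorized goal is now straight ahead

In the same spirit as the model, an intermittently available visual signal could serve as a teaching signal to tune such an update rule; here the shift rule is simply hard-coded for illustration.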