Learning Multiple Goal-Directed Actions Through Self-Organization of a Dynamic Neural Network Model: A Humanoid Robot Experiment

Abstract
We introduce a model that accounts for the cognitive mechanisms of learning and generating multiple goal-directed actions. The model employs the novel idea of a "sensory forward model," which is assumed to function in the inferior parietal cortex for the generation of skilled behaviors in humans and monkeys. The sensory forward model can generate a set of different goal-directed actions by exploiting the initial-sensitivity characteristics of its acquired forward dynamics. Analyses of our robot experiments show qualitatively how learning can generalize across situational variations, and how top-down intention toward a specific goal state can be reconciled with bottom-up sensation from reality.
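The core idea, that a single set of learned forward dynamics can produce distinct behaviors depending only on its initial internal state, can be illustrated with a minimal sketch. This is not the authors' implementation; the recurrent network, its weights, and all parameter names here are illustrative assumptions standing in for the acquired forward dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random weights stand in for learned forward dynamics
# (in the paper these would be acquired through training).
W_h = rng.normal(scale=1.5, size=(8, 8))   # hidden-to-hidden recurrence
W_o = rng.normal(scale=1.0, size=(2, 8))   # hidden-to-sensory prediction

def rollout(h0, steps=20):
    """Closed-loop rollout: predict a sensory sequence from an initial state."""
    h = np.asarray(h0, dtype=float)
    preds = []
    for _ in range(steps):
        h = np.tanh(W_h @ h)      # internal forward dynamics
        preds.append(W_o @ h)     # predicted sensation at this step
    return np.array(preds)

# Two different initial internal states ("intentions") drive the
# same dynamics toward different predicted sensory trajectories.
seq_a = rollout(rng.normal(size=8))
seq_b = rollout(rng.normal(size=8))

print(seq_a.shape)                          # each rollout: (steps, sensory dims)
print(float(np.abs(seq_a - seq_b).mean()))  # divergence between the two rollouts
```

In this toy setting, the "goal" is encoded only in the initial hidden state, while the weights (the dynamics) are shared, mirroring how the sensory forward model is said to generate multiple goal-directed actions from one learned system.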