MOSAIC Model for Sensorimotor Learning and Control
Top Cited Papers
- 1 October 2001
- journal article
- Published by MIT Press in Neural Computation
- Vol. 13 (10), 2201-2220
- https://doi.org/10.1162/089976601750541778
Abstract
Humans demonstrate a remarkable ability to generate accurate and appropriate motor behavior under many different and often uncertain environmental conditions. We previously proposed a new modular architecture, the modular selection and identification for control (MOSAIC) model, for motor learning and control based on multiple pairs of forward (predictor) and inverse (controller) models. The architecture simultaneously learns the multiple inverse models necessary for control as well as how to select the set of inverse models appropriate for a given environment. It combines both feedforward and feedback sensorimotor information so that the controllers can be selected both prior to movement and subsequently during movement. This article extends and evaluates the MOSAIC architecture in the following respects. First, the learning in the architecture was implemented by both the original gradient-descent method and the expectation-maximization (EM) algorithm. Unlike gradient descent, the newly derived EM algorithm is robust to the initial starting conditions and learning parameters. Second, simulations of an object manipulation task demonstrate that the architecture can learn to manipulate multiple objects and switch between them appropriately. Moreover, after learning, the model generalizes to novel objects whose dynamics lie within the polyhedron of already learned dynamics. Finally, when each of the dynamics is associated with a particular object shape, the model is able to select the appropriate controller before movement execution. When presented with a novel shape-dynamics pairing, inappropriate activation of modules is observed, followed by on-line correction.
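The selection mechanism described in the abstract — forward models predict the sensory consequences of a command, and the inverse models whose paired predictors fit the observation best dominate control — can be sketched as a soft-max weighting of modules by prediction error. The following is a minimal illustrative sketch, not the paper's implementation: the function names, the fixed Gaussian scale `sigma`, and the example numbers are all assumptions introduced here.

```python
import numpy as np

def responsibilities(x_pred, x_obs, sigma=0.1):
    """Weight each module by how well its forward model predicted the
    observed state: Gaussian likelihood of the prediction error,
    normalized across modules (a soft-max over -error/(2*sigma**2))."""
    err = np.sum((x_pred - x_obs) ** 2, axis=1)   # squared error per module
    lik = np.exp(-err / (2.0 * sigma ** 2))       # unnormalized likelihoods
    return lik / np.sum(lik)

def blended_command(u_modules, lam):
    """Final motor command: responsibility-weighted sum of the commands
    proposed by each module's inverse model."""
    return np.sum(lam[:, None] * u_modules, axis=0)

# Two hypothetical modules: module 0 predicts the observed state well,
# module 1 (tuned to different object dynamics) predicts it poorly.
x_obs = np.array([1.0, 0.0])
x_pred = np.array([[1.01, 0.0],    # module 0: small prediction error
                   [2.00, 1.0]])   # module 1: large prediction error
lam = responsibilities(x_pred, x_obs)
u = blended_command(np.array([[0.5, 0.0],
                              [5.0, 5.0]]), lam)
# lam concentrates on module 0, so u stays close to module 0's command.
```

In the full model, these responsibilities also gate learning (each module's forward and inverse models are updated in proportion to its responsibility, which is how the EM formulation arises) and are combined with prior predictions from sensory cues such as object shape, enabling selection before movement onset.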
This publication has 15 references indexed in Scilit:
- Adaptive control using multiple models. IEEE Transactions on Automatic Control, 1997
- Adaptation and learning using multiple models, switching, and tuning. IEEE Control Systems, 1995
- Hierarchical Mixtures of Experts and the EM Algorithm. Neural Computation, 1994
- Recognition of manipulated objects by motor learning with modular architecture networks. Neural Networks, 1993
- On-line estimation of hidden Markov model parameters based on the Kullback-Leibler information measure. IEEE Transactions on Signal Processing, 1993
- Forward Models: Supervised Learning with a Distal Teacher. Cognitive Science, 1992
- Task Decomposition Through Competition in a Modular Connectionist Architecture: The What and Where Vision Tasks. Cognitive Science, 1991
- Adaptive Mixtures of Local Experts. Neural Computation, 1991
- Mixture autoregressive hidden Markov models for speech signals. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1985
- A Maximization Technique Occurring in the Statistical Analysis of Probabilistic Functions of Markov Chains. The Annals of Mathematical Statistics, 1970