Visual space task specification, planning and control

Abstract
Robot manipulators, some thirty years after their commercial introduction, have found widespread application in structured industrial environments, performing, for instance, repetitive tasks on an assembly line. Successful application in unstructured environments, however, has proven much harder. Yet there are many such tasks where robots would be useful. We present a promising approach to visual (and, more generally, sensory) robot control that does not require modeling of robot transfer functions or the use of absolute world coordinate systems, and is thus suitable for use in unstructured environments. Our approach codes actions and tasks in terms of desired general perceptions rather than motor sequences. We argue that our vision space approach is particularly suited to easy teaching/programming of a robot; for instance, a task can be taught by supplying an image sequence illustrating it. The resulting robot behavior is robust to changes in the environment, dynamically adjusting the motor control rules in response to environmental variation.
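
The control scheme the abstract sketches can be made concrete. Below is a minimal Python sketch, not the paper's implementation, of model-free visual control in this spirit: the task is a desired feature vector rather than a motor sequence, and the motor control rule is adapted online by estimating the image-feature Jacobian from observed motion. The exploratory-motion initialization and Broyden-style secant update are common choices assumed here for illustration (the abstract does not name an estimator), and read_features and move_joints are hypothetical placeholders for the tracking and actuation layers.

import numpy as np

# Minimal sketch (illustrative, not the authors' code) of "vision space"
# control: the goal is a desired perception y_goal, and the motor rule is
# adapted online instead of being derived from a calibrated robot/camera
# model.  read_features() and move_joints(dq) are hypothetical placeholders.

def estimate_jacobian(read_features, move_joints, n_joints, eps=1e-2):
    """Estimate the image Jacobian from small exploratory joint motions,
    one joint at a time -- no kinematic or camera model required."""
    y0 = read_features()
    J = np.zeros((y0.size, n_joints))
    for j in range(n_joints):
        probe = np.zeros(n_joints)
        probe[j] = eps
        move_joints(probe)
        J[:, j] = (read_features() - y0) / eps
        move_joints(-probe)                      # undo the probe
    return J

def broyden_update(J, dq, dy):
    """Secant correction: adjust J so it explains the feature change dy
    that the joint step dq actually produced."""
    denom = float(dq @ dq)
    if denom > 1e-12:
        J = J + np.outer(dy - J @ dq, dq) / denom
    return J

def visual_servo(read_features, move_joints, y_goal, J,
                 gain=0.3, tol=1e-4, max_iters=100):
    """Drive the perceived features toward y_goal, re-estimating the
    control rule (the Jacobian) at every step."""
    y = read_features()
    for _ in range(max_iters):
        err = y_goal - y
        if np.linalg.norm(err) < tol:            # desired perception reached
            return True
        dq = gain * (np.linalg.pinv(J) @ err)    # step toward the goal image
        move_joints(dq)
        y_new = read_features()
        J = broyden_update(J, dq, y_new - y)     # adapt to what was observed
        y = y_new
    return False

if __name__ == "__main__":
    # Toy stand-in for robot + camera: an unknown linear map y = J_true @ q.
    rng = np.random.default_rng(0)
    J_true = rng.normal(size=(4, 3))
    q = np.zeros(3)

    def read_features():
        return J_true @ q

    def move_joints(dq):
        global q
        q = q + dq

    y_goal = J_true @ np.array([0.5, -0.2, 0.3])   # a reachable perception
    J0 = estimate_jacobian(read_features, move_joints, n_joints=3)
    print("converged:", visual_servo(read_features, move_joints, y_goal, J0))

Because the Jacobian is continually re-estimated from what the camera actually observes, the same loop keeps working when the robot, camera, or scene geometry drifts, which is the robustness property the abstract claims; no absolute world coordinates or robot transfer-function model ever enters the computation.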