Abstract
A state variable formulation of the remote manipulation problem is presented, applicable to human-supervised or autonomous computer-manipulators. A discrete state vector, containing position variables for the manipulator and relevant objects, spans a quantized state space comprising many static configurations of objects and hand. A manipulation task is a desired new state. State transitions are assigned costs and are accomplished by commands: hand motions plus grasp, release, push, twist, etc. In control theory terms the problem is to find the cheapest control history (if any) from present to desired state. A method similar to dynamic programming is used to determine the optimal history. The system is capable of obstacle avoidance, grasp rendezvous, incorporation of new sensor data, remembering results of previous tasks, and so on.
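The search the abstract describes, finding the cheapest command history from the present state to the desired state over a quantized state space, can be sketched as a uniform-cost (Dijkstra-style) search, which is closely related to the dynamic-programming method the paper names. Everything below is illustrative: the state encoding, the `commands` generator, and the costs are assumptions, not the paper's actual formulation.

```python
import heapq

def cheapest_history(start, goal, commands):
    """Uniform-cost search over a discrete state space.

    `commands` maps a state to a list of (command_name, next_state, cost)
    triples -- e.g. hand motions plus grasp, release, push, twist.
    Returns (total_cost, command_list) for the cheapest history reaching
    `goal`, or None if no feasible command sequence exists.
    """
    frontier = [(0, start, [])]          # (cost so far, state, history)
    best = {start: 0}                    # cheapest known cost per state
    while frontier:
        cost, state, history = heapq.heappop(frontier)
        if state == goal:
            return cost, history
        if cost > best.get(state, float("inf")):
            continue                     # stale queue entry
        for name, nxt, step_cost in commands(state):
            new_cost = cost + step_cost
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt, history + [name]))
    return None                          # desired state unreachable
```

For example, with integer states 0..4 and two hypothetical commands, "step" (advance 1, cost 1) and "jump" (advance 2, cost 3), the search from state 0 to state 4 returns cost 4 via four "step" commands, preferring the cheaper history even though "jump" reaches the goal in fewer commands.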