Abstract
Using a combination of techniques from visual representations, view synthesis, and visual-motor model estimation, we present a method for animating movements of an articulated agent (e.g. a human or robot arm) without the use of any prior models or explicit 3D information. The information needed to generate simulated images can be acquired either on-line or off-line by watching the agent perform an arbitrary, possibly unrelated task. We present experimental results synthesizing image sequences of the simulated movement of a human arm and a PUMA 760 robot arm. Control is expressed in either image (camera), motor (joint), or Cartesian world coordinates. We have created a user interface in which a user can input a movement program and then, upon execution, view movies of the (simulated) agent executing the program, along with the instantaneous values of the dynamic variables.
