Acquiring hand-action models by attention point analysis

Abstract
This paper describes our current research on learning task-level representations by a robot through observation of human demonstrations. We focus on human hand actions and represent them as symbolic task models. We propose a framework for such models that efficiently integrates multiple observations based on attention points, and we evaluate the resulting model using a human-form robot. We propose a two-step observation mechanism. In the first step, the system roughly observes the entire sequence of the human demonstration, builds a rough task model, and extracts attention points (APs). An attention point indicates a time and position in the observation sequence that requires further detailed analysis. In the second step, the system closely examines the sequence around the APs and obtains attribute values for the task model, such as what to grasp, which hand to use, or the precise trajectory of the manipulated object. We have implemented this system on a human-form robot and demonstrated its effectiveness.
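The two-step mechanism described above can be illustrated with a minimal sketch: a coarse pass over the whole sequence flags attention points, and a fine pass re-examines only the neighborhood of each AP. All function names, the 1-D trajectory, and the change-detection threshold are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a two-step (coarse/fine) observation pass.
# Names, the 1-D trajectory, and the threshold are assumptions for
# illustration only; the paper's actual attributes are richer (grasp
# target, hand choice, object trajectory).

def coarse_pass(seq, threshold=1.0):
    """First step: scan the entire demonstration once and mark attention
    points (APs): indices where the observed value changes sharply and
    therefore warrants closer analysis."""
    aps = []
    for t in range(1, len(seq)):
        if abs(seq[t] - seq[t - 1]) > threshold:
            aps.append(t)
    return aps

def fine_pass(seq, aps, window=1):
    """Second step: closely examine only the neighborhood of each AP and
    extract detailed attribute values (here: the local sub-trajectory)."""
    details = {}
    for t in aps:
        lo, hi = max(0, t - window), min(len(seq), t + window + 1)
        details[t] = seq[lo:hi]
    return details

if __name__ == "__main__":
    # Toy 1-D "hand trajectory": smooth motion with one abrupt jump.
    trajectory = [0.0, 0.1, 0.2, 2.5, 2.6, 2.7]
    aps = coarse_pass(trajectory)
    print(aps)                          # the single detected AP
    print(fine_pass(trajectory, aps))   # detail window around it
```

The split keeps the expensive detailed analysis confined to a few short windows, which is the efficiency argument the abstract makes for attention points.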