Acquiring hand-action models by attention point analysis
- 13 November 2002
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
This paper describes our current research on learning task-level representations by a robot through observation of human demonstrations. We focus on human hand actions and represent such hand actions in symbolic task models. We propose a framework for such models by efficiently integrating multiple observations based on attention points; we then evaluate the produced model using a human-form robot. We propose a two-step observation mechanism. In the first step, the system roughly observes the entire sequence of the human demonstration, builds a rough task model, and extracts attention points (APs). The attention points indicate the time and the position in the observation sequence that require further detailed analysis. In the second step, the system closely examines the sequence around the APs and obtains attribute values for the task model, such as what to grasp, which hand to use, or the precise trajectory of the manipulated object. We have implemented this system on a human-form robot and demonstrated its effectiveness.
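The two-step mechanism described above can be sketched in code. This is a hypothetical illustration only, assuming a coarse pass that flags abrupt hand-motion changes as attention points and a detailed pass over a small temporal window around each AP; all names (`Frame`, `first_pass`, `second_pass`, the `motion_change` feature) are illustrative and not the authors' actual API.

```python
# Hypothetical sketch of the paper's two-step observation pipeline.
# All class/function names are illustrative assumptions, not the authors' code.
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    time: float            # timestamp within the demonstration sequence
    hand_position: tuple   # (x, y, z) of the tracked hand
    motion_change: float   # magnitude of change in hand motion at this frame

@dataclass
class AttentionPoint:
    time: float
    position: tuple

def first_pass(frames: List[Frame], threshold: float) -> List[AttentionPoint]:
    """Step 1: coarse scan of the whole demonstration. Frames where hand
    motion changes abruptly (e.g. grasp/release moments) become APs."""
    return [AttentionPoint(f.time, f.hand_position)
            for f in frames if f.motion_change > threshold]

def second_pass(frames: List[Frame], aps: List[AttentionPoint],
                window: float) -> dict:
    """Step 2: re-examine a small temporal window around each AP to fill in
    task-model attributes (here, just collecting the nearby frames as a
    stand-in for detailed analysis such as grasp type or object trajectory)."""
    model = {}
    for ap in aps:
        nearby = [f for f in frames if abs(f.time - ap.time) <= window]
        model[ap.time] = {"position": ap.position,
                          "detail_frames": len(nearby)}
    return model

# Toy demonstration: a single abrupt motion change at t = 0.5 s
frames = [Frame(t * 0.1, (t, 0, 0), 1.0 if t == 5 else 0.1) for t in range(10)]
aps = first_pass(frames, threshold=0.5)
task_model = second_pass(frames, aps, window=0.15)
print(len(aps), task_model[aps[0].time]["detail_frames"])
```

The coarse pass keeps the full-sequence observation cheap; only the short windows around the APs receive the expensive detailed analysis, which is the efficiency argument the abstract makes for integrating multiple observations.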