Learning human activities and object affordances from RGB-D videos
Open Access
- 11 July 2013
- Research article
- Published by SAGE Publications in The International Journal of Robotics Research
- Vol. 32 (8), 951-970
- https://doi.org/10.1177/0278364913478446
Abstract
Understanding human activities and object affordances are two very important skills, especially for personal robots which operate in human environments. In this work, we consider the problem of extracting a descriptive labeling of the sequence of sub-activities being performed by a human, and more importantly, of their interactions with the objects in the form of associated affordances. Given an RGB-D video, we jointly model the human activities and object affordances as a Markov random field where the nodes represent objects and sub-activities, and the edges represent the relationships between object affordances, their relations with sub-activities, and their evolution over time. We formulate the learning problem using a structural support vector machine (SSVM) approach, where labelings over various alternate temporal segmentations are considered as latent variables. We tested our method on a challenging dataset comprising 120 activity videos collected from 4 subjects, and obtained an accuracy of 79.4% for affordance, 63.4% for sub-activity and 75.0% for high-level activity labeling. We then demonstrate the use of such descriptive labeling in performing assistive tasks by a PR2 robot.
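To give a concrete sense of the joint labeling the abstract describes, the sketch below scores assignments in a tiny Markov random field with one object node (affordance label) and one sub-activity node, linked by an edge that couples the two labels. The label sets, node and edge potentials, and the exhaustive MAP search are all illustrative assumptions, not the paper's actual features or inference procedure.

```python
import itertools

# Toy label spaces, loosely inspired by the paper's setting.
AFFORDANCES = ["reachable", "movable"]
SUB_ACTIVITIES = ["reaching", "moving"]

# Node potentials: score of assigning each label to each node.
# These numbers are made up for illustration.
node_pot = {
    ("obj", "reachable"): 1.0, ("obj", "movable"): 0.2,
    ("act", "reaching"): 0.8, ("act", "moving"): 0.3,
}

# Edge potential coupling the object's affordance to the sub-activity,
# rewarding compatible pairs (e.g. a reachable object during "reaching").
edge_pot = {
    ("reachable", "reaching"): 1.0, ("reachable", "moving"): 0.0,
    ("movable", "reaching"): 0.0, ("movable", "moving"): 1.0,
}

def score(aff: str, act: str) -> float:
    """Total score of a joint labeling (node terms plus the edge term)."""
    return node_pot[("obj", aff)] + node_pot[("act", act)] + edge_pot[(aff, act)]

def best_labeling() -> tuple[str, str]:
    """Exhaustive MAP inference over the tiny joint label space."""
    return max(itertools.product(AFFORDANCES, SUB_ACTIVITIES),
               key=lambda pair: score(*pair))
```

In the paper, the graph has many such nodes per temporal segment, edges also link segments over time, and the potentials are learned weights on features via the SSVM objective rather than hand-set constants.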
This publication has 32 references indexed in Scilit:
- Robust 3D visual tracking using particle filtering on the special Euclidean group: A combined approach of keypoint and edge features. The International Journal of Robotics Research, 2012.
- Learning the semantics of object–action relations by observation. The International Journal of Robotics Research, 2011.
- Learning spatial relationships between objects. The International Journal of Robotics Research, 2011.
- Manipulator and object tracking for in-hand 3D object modeling. The International Journal of Robotics Research, 2011.
- Human activity analysis. ACM Computing Surveys, 2011.
- The MOPED framework: Object recognition and pose estimation for manipulation. The International Journal of Robotics Research, 2011.
- Learning Visual Object Categories for Robot Affordance Prediction. The International Journal of Robotics Research, 2009.
- Cutting-plane training of structural SVMs. Machine Learning, 2009.
- Robotic Grasping of Novel Objects using Vision. The International Journal of Robotics Research, 2008.
- Visual Learning by Imitation With Motor Representations. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2005.