An appearance-based representation of action

Abstract
A new view-based approach to the representation of action is presented. Our underlying representations are view-based descriptions of the coarse image motion associated with viewing given actions from particular directions. Using these descriptions, we propose an appearance-based action-recognition strategy comprising two stages: 1) a motion energy image (MEI) is computed that grossly describes the spatial distribution of motion energy for a given view of a given action, and the input MEI is matched against stored models that span the range of views of known actions; 2) any models that plausibly match the input are tested for coarse, categorical agreement between a stored motion model of the action and a parametrization of the input motion. Using a "sitting" action as an example, and using a manually placed stick model, we develop a representation and verification technique that collapses the temporal variations of the motion parameters into a single, low-order vector.
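As a rough illustration of the first stage, an MEI of the kind described above can be sketched as the union of thresholded frame differences over a sequence of grayscale frames. This is only a minimal sketch consistent with the abstract's description, not the authors' implementation; the function name, the fixed threshold, and the simple two-frame differencing are all assumptions for illustration.

```python
import numpy as np

def motion_energy_image(frames, threshold=15):
    """Sketch of a binary motion energy image (MEI): the union of
    thresholded absolute frame differences over a grayscale sequence.

    frames: sequence of 2-D arrays of equal shape (grayscale images).
    threshold: minimum per-pixel intensity change counted as motion
               (an assumed parameter, not from the paper).
    """
    frames = [np.asarray(f, dtype=np.int16) for f in frames]
    mei = np.zeros(frames[0].shape, dtype=bool)
    # Accumulate everywhere any consecutive pair of frames differs
    # by more than the threshold.
    for prev, curr in zip(frames, frames[1:]):
        mei |= np.abs(curr - prev) > threshold
    return mei

# Usage: a bright square translating right leaves a motion "smear"
# covering the union of the regions it entered or left.
frames = [np.zeros((8, 8), dtype=np.uint8) for _ in range(3)]
frames[0][2:4, 1:3] = 255
frames[1][2:4, 2:4] = 255
frames[2][2:4, 3:5] = 255
mei = motion_energy_image(frames)
```

The resulting binary image grossly localizes where motion occurred over the sequence, which is the property the matching stage exploits.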
