Abstract
The recognition of human movements such as walking, running, or climbing has previously been approached by tracking a number of feature points and either classifying the trajectories directly or matching them against a high-level model of the movement. A major difficulty with these methods is acquiring and tracking the requisite feature points, which are generally specific joints such as knees or ankles. This requires prior recognition and/or part segmentation of the actor. We show that the recognition of walking, or any repetitive-motion activity, can be accomplished by bottom-up processing that does not require prior identification of specific parts or classification of the actor. In particular, we demonstrate that repetitive motion is such a strong cue that the moving actor can be segmented, normalized spatially and temporally, and recognized by matching against a spatiotemporal template of motion features. We have implemented a real-time system that can recognize and classify repetitive-motion activities in ordinary gray-scale image sequences.
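The pipeline described above (segment the moving actor, normalize spatially and temporally, match against a spatiotemporal template) can be illustrated with a minimal sketch. The code below is not the authors' implementation; the grid sizes, the crude motion-energy segmentation, and the nearest-template classifier are assumptions chosen only to make the steps concrete.

```python
# Minimal sketch of spatiotemporal template matching for repetitive motion.
# All function names, grid sizes, and the segmentation heuristic are
# illustrative assumptions, not the method from the paper.
import numpy as np


def motion_features(frames):
    """Per-pixel motion magnitude from gray-scale frame differences."""
    frames = np.asarray(frames, dtype=float)           # shape (T, H, W)
    return np.abs(np.diff(frames, axis=0))             # shape (T-1, H, W)


def normalize_spatially(feat, out_hw=(16, 16)):
    """Crop the feature maps to the moving region and rescale to a fixed grid."""
    mask = feat.sum(axis=0) > feat.mean()               # rough actor segmentation
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        ys, xs = np.arange(feat.shape[1]), np.arange(feat.shape[2])
    crop = feat[:, ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # nearest-neighbour rescale onto the fixed spatial grid
    ry = np.linspace(0, crop.shape[1] - 1, out_hw[0]).round().astype(int)
    rx = np.linspace(0, crop.shape[2] - 1, out_hw[1]).round().astype(int)
    return crop[:, ry][:, :, rx]


def normalize_temporally(feat, n_phases=8):
    """Resample the feature sequence to a fixed number of phases per cycle."""
    t_old = np.linspace(0, 1, feat.shape[0])
    t_new = np.linspace(0, 1, n_phases)
    flat = feat.reshape(feat.shape[0], -1)
    resampled = np.stack([np.interp(t_new, t_old, flat[:, i])
                          for i in range(flat.shape[1])], axis=1)
    return resampled.reshape(n_phases, *feat.shape[1:])


def build_template(frames):
    """Spatially and temporally normalized, energy-normalized motion template."""
    feat = normalize_temporally(normalize_spatially(motion_features(frames)))
    return feat / (np.linalg.norm(feat) + 1e-8)


def classify(frames, reference_templates):
    """Nearest-template classification; reference_templates maps label -> template."""
    query = build_template(frames)
    return min(reference_templates,
               key=lambda lbl: np.linalg.norm(query - reference_templates[lbl]))
```

In this sketch, an unknown sequence is labeled by building its normalized template and choosing the reference activity (e.g., walking or running) whose stored template is closest in Euclidean distance.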
