Velocity adaptation of space-time interest points

Abstract
The notion of local features in space-time has recently been proposed to capture and describe local events in video. When computing space-time descriptors, however, the result may depend strongly on the relative motion between the object and the camera. To compensate for this variation, we present a method that automatically adapts the features to the local velocity of the image pattern and hence yields a video representation that is stable under different amounts of camera motion. Experimentally, we show that velocity adaptation substantially increases the repeatability of interest points as well as the stability of their associated descriptors. Moreover, in an application to human action recognition, we demonstrate how velocity-adapted features enable the recognition of human actions in situations with unknown camera motion and complex, non-stationary backgrounds.
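To make the idea concrete, the sketch below shows one way velocity adaptation of this kind could be realized in Python with NumPy/SciPy: a local velocity is estimated from spatio-temporal gradients under brightness constancy (in the spirit of Lucas-Kanade), and the patch is then resampled under the corresponding Galilean transform so that descriptors are computed in a motion-stabilized frame. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, a uniform window replaces the Gaussian weighting typical of scale-space methods, and the paper adapts the spatio-temporal filter kernels themselves rather than resampling the video.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def estimate_local_velocity(volume):
    """Least-squares velocity estimate (vx, vy) for a small (T, Y, X) patch,
    assuming brightness constancy: Ix*vx + Iy*vy + It = 0."""
    It, Iy, Ix = np.gradient(volume.astype(float))  # gradients along t, y, x
    # Pooled entries of the second-moment matrix; a uniform window stands in
    # for the Gaussian weighting used in scale-space formulations.
    A = np.array([[(Ix * Ix).mean(), (Ix * Iy).mean()],
                  [(Ix * Iy).mean(), (Iy * Iy).mean()]])
    b = -np.array([(Ix * It).mean(), (Iy * It).mean()])
    return np.linalg.solve(A, b)  # (vx, vy)

def galilean_warp(volume, vx, vy):
    """Resample the patch so that a pattern translating with (vx, vy)
    appears stationary: W(t, y, x) = I(t, y + vy*t, x + vx*t)."""
    T, Y, X = volume.shape
    t, y, x = np.meshgrid(np.arange(T), np.arange(Y), np.arange(X),
                          indexing="ij")
    coords = np.stack([t, y + vy * t, x + vx * t])
    return map_coordinates(volume.astype(float), coords,
                           order=1, mode="nearest")

def velocity_adapt(volume, iterations=3):
    """Iterate estimate-and-warp, accumulating the total velocity, until the
    residual velocity in the warped patch is small."""
    total_v = np.zeros(2)
    warped = volume
    for _ in range(iterations):
        total_v += estimate_local_velocity(warped)
        warped = galilean_warp(volume, *total_v)
    return warped, total_v
```

The fixed-point loop in velocity_adapt mirrors the adaptation idea described in the abstract: because the estimated velocity depends on the frame in which it is measured, estimation and warping are alternated until the representation becomes invariant to the relative motion between object and camera.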
