Multi-modal extraction of highlights from TV Formula 1 programs

Abstract
As the amount of publicly available video data grows, the need to automatically infer semantics from raw video data becomes significant. In this paper, we focus on the use of dynamic Bayesian networks (DBNs) for that purpose, and demonstrate how they can be effectively applied to fuse the evidence obtained from different media information sources. The approach is validated in the particular domain of Formula 1 race videos. For that specific domain we introduce a robust audiovisual feature extraction scheme and a text detection and recognition method. Based on numerous experiments performed with DBNs, we give some recommendations with respect to the modeling of temporal and atemporal dependencies within the network. Finally, we present experimental results for the detection of excited speech and the extraction of highlights, as well as the advantageous query capabilities of our system.
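The paper itself contains no code; as a minimal sketch of the kind of evidence fusion a DBN performs, the following Python snippet forward-filters a two-state hidden "highlight" variable over time, fusing three conditionally independent observation streams (audio excitement, visual motion, superimposed-text cue) under a naive-Bayes observation model. All state names, stream names, and probability values are illustrative assumptions, not taken from the paper.

    # Minimal sketch of DBN-style multi-modal fusion (not the authors' model).
    # A two-state hidden "highlight" variable is filtered forward in time;
    # each time slice fuses three binary observation streams assumed
    # conditionally independent given the hidden state.
    import numpy as np

    # Transition model P(state_t | state_{t-1}); assumed values.
    # Row/column order: [no_highlight, highlight].
    T = np.array([[0.95, 0.05],
                  [0.20, 0.80]])

    # Per-stream likelihoods P(obs = 1 | state); assumed values.
    LIK = {
        "audio_excited": np.array([0.10, 0.70]),
        "visual_motion": np.array([0.30, 0.80]),
        "text_cue":      np.array([0.05, 0.40]),
    }

    def filter_step(belief, obs):
        """One forward-filtering step: predict with T, then multiply in
        the likelihood of each stream's binary observation and normalize."""
        predicted = belief @ T
        for stream, value in obs.items():
            p1 = LIK[stream]                       # P(obs = 1 | state)
            predicted = predicted * (p1 if value else 1.0 - p1)
        return predicted / predicted.sum()

    # Example: three time slices of multi-modal evidence.
    belief = np.array([0.99, 0.01])                # prior: highlights are rare
    observations = [
        {"audio_excited": 0, "visual_motion": 0, "text_cue": 0},
        {"audio_excited": 1, "visual_motion": 1, "text_cue": 0},
        {"audio_excited": 1, "visual_motion": 1, "text_cue": 1},
    ]
    for t, obs in enumerate(observations):
        belief = filter_step(belief, obs)
        print(f"t={t}: P(highlight) = {belief[1]:.3f}")

In this toy version, the posterior probability of a highlight rises as more streams agree; a full DBN as used in the paper would additionally model dependencies among the feature variables themselves.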
