Gibbs likelihoods for Bayesian tracking

Abstract
Bayesian methods for visual tracking model the likelihood of image measurements conditioned on a tracking hypothesis. Image measurements may, for example, correspond to various filter responses at multiple scales and orientations. Most tracking approaches rely on ad hoc likelihood models, while those that use more rigorous, learned models often make unrealistic assumptions about the underlying probabilistic model. Such assumptions cause problems for Bayesian inference when an unsound likelihood is combined with an a priori probability distribution. Errors in modeling the likelihood can lead to brittle tracking results, particularly when using non-parametric inference techniques such as particle filtering. We show how assumptions of conditional independence of filter responses are violated in common tracking scenarios, leading to incorrect likelihood models and problems for Bayesian inference. We address the problem of modeling more principled likelihoods using Gibbs learning. The learned models are compared with naive Bayes methods, which assume conditional independence of the filter responses. We show how these Gibbs models can be used as an effective image likelihood, and we demonstrate them in the context of particle filter-based human tracking.
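The contrast the abstract draws can be made concrete with a minimal sketch. Below, a naive Bayes likelihood scores filter responses as if they were conditionally independent (a product of per-filter densities), while a Gibbs (log-linear) likelihood scores them through an energy whose features can include cross terms between filters, so correlated responses are modeled explicitly. All function names, the Gaussian marginals, the specific features, the weights, and the normalizer value are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

def naive_bayes_loglik(responses, means, stds):
    # Naive Bayes assumption: filter responses are conditionally
    # independent, so the log-likelihood is a sum of per-filter
    # Gaussian log-densities (Gaussian marginals assumed here).
    z = (responses - means) / stds
    return float(np.sum(-0.5 * z**2 - np.log(stds * np.sqrt(2.0 * np.pi))))

def gibbs_loglik(responses, lambdas, features, log_Z):
    # Gibbs (log-linear) model: log p(r) = -sum_k lambda_k * f_k(r) - log Z.
    # The features f_k may capture joint statistics of several filters,
    # so dependencies between responses are represented in the energy.
    energy = sum(lam * f(responses) for lam, f in zip(lambdas, features))
    return float(-energy - log_Z)

# Toy example: two strongly correlated filter responses.
r = np.array([1.0, 1.1])
means, stds = np.zeros(2), np.ones(2)
print(naive_bayes_loglik(r, means, stds))

# Illustrative Gibbs features: per-filter squares plus one cross term.
features = [lambda x: x[0]**2, lambda x: x[1]**2, lambda x: x[0] * x[1]]
lambdas = [0.5, 0.5, -0.4]  # negative cross weight rewards correlated responses
log_Z = 1.84                # placeholder normalizer; learned in practice
print(gibbs_loglik(r, lambdas, features, log_Z))
```

In a particle filter, either function would weight each tracking hypothesis by its image likelihood; the paper's point is that the naive Bayes form misstates that weight when responses are correlated, whereas the learned Gibbs form need not.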
