An adaptive fusion architecture for target tracking
- 25 June 2003
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
A vision system is demonstrated that adaptively allocates computational resources over multiple cues to robustly track a target in 3D. The system uses a particle filter to maintain multiple hypotheses of the target location. Bayesian probability theory provides the framework for sensor fusion, and resource scheduling is used to intelligently allocate the limited computational resources available across the suite of cues. The system is shown to track a person moving in 3D through a cluttered environment.
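The loop the abstract describes — a particle filter whose cue likelihoods are fused multiplicatively under a Bayesian independence assumption, with a scheduler spending a limited evaluation budget on the most useful cues — can be sketched roughly as follows. All names, the Gaussian noise and likelihood models, and the utility-ranked scheduler are illustrative assumptions, not the paper's actual implementation:

```python
import math
import random

def predict(particles, noise=0.1):
    # Diffuse each 3-D position hypothesis with Gaussian process noise
    # (assumed motion model; the paper's dynamics may differ).
    return [tuple(x + random.gauss(0.0, noise) for x in p) for p in particles]

def gaussian_likelihood(particle, measurement, sigma):
    # Unnormalised Gaussian likelihood of one cue's measurement.
    d2 = sum((a - b) ** 2 for a, b in zip(particle, measurement))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def fuse_and_resample(particles, cues, budget):
    # Scheduler (illustrative): spend the evaluation budget on the cues
    # with the highest utility score, then fuse the selected cues'
    # likelihoods by multiplication -- the naive-Bayes independence
    # assumption behind multiplicative sensor fusion.
    active = sorted(cues, key=lambda c: c["utility"], reverse=True)[:budget]
    weights = []
    for p in particles:
        w = 1.0
        for cue in active:
            w *= gaussian_likelihood(p, cue["measurement"], cue["sigma"])
        weights.append(w)
    total = sum(weights)
    if total <= 0.0:
        return list(particles)  # all weights underflowed: keep the prior
    weights = [w / total for w in weights]
    # Multinomial resampling: draw a new set proportional to fused weight.
    return random.choices(particles, weights=weights, k=len(particles))
```

With particles scattered around the origin and two reliable cues measuring a target at (1, 1, 1), a few predict/fuse/resample iterations pull the particle mean toward the target, while a low-utility outlier cue is skipped by the budgeted scheduler.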