Adaptive robotic visual tracking: theory and experiments

Abstract
The use of a vision sensor in the feedback loop is addressed within the controlled active vision framework. Algorithms are proposed for solving the robotic visual tracking and servoing problem in an eye-in-hand configuration. Visual tracking is formulated as a problem of combining control with computer vision. The sum-of-squared-differences (SSD) optical flow is used to compute the vector of discrete displacements. These displacements are fed to an adaptive controller (a self-tuning regulator) that generates commands for the robot control system. The procedure is based on online estimation of the relative distance of the target from the camera; only partial knowledge of this distance is required, obviating the need for offline calibration. Three different adaptive control schemes have been implemented, both in simulation and in experiments. The computational complexity and the experimental results demonstrate that the proposed algorithms can be implemented in real time.
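As an illustration of the displacement computation the abstract refers to, the following is a minimal sketch of SSD block matching over integer pixel displacements. The function name `ssd_displacement`, the window size, the search range, and the use of NumPy are assumptions made for illustration; this is not the authors' implementation, which also feeds the displacements to an adaptive controller.

```python
import numpy as np


def ssd_displacement(prev_frame, curr_frame, center, window=7, search=8):
    """Estimate the discrete displacement of a small window around `center`
    by minimizing the sum of squared differences (SSD) between the previous
    and current grayscale frames over a bounded search region.

    Assumes `center` lies far enough from the image border for the template
    to fit entirely inside `prev_frame`.
    """
    half = window // 2
    cy, cx = center
    template = prev_frame[cy - half:cy + half + 1,
                          cx - half:cx + half + 1].astype(float)

    best_ssd = np.inf
    best_disp = (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            # Skip candidate windows that fall outside the current frame.
            if (y - half < 0 or x - half < 0 or
                    y + half >= curr_frame.shape[0] or
                    x + half >= curr_frame.shape[1]):
                continue
            candidate = curr_frame[y - half:y + half + 1,
                                   x - half:x + half + 1].astype(float)
            ssd = np.sum((candidate - template) ** 2)
            if ssd < best_ssd:
                best_ssd, best_disp = ssd, (dy, dx)
    return best_disp  # (dy, dx) in pixels


if __name__ == "__main__":
    # Minimal check: a bright square shifted by (2, 3) pixels between frames.
    prev = np.zeros((64, 64))
    curr = np.zeros((64, 64))
    prev[30:36, 30:36] = 1.0
    curr[32:38, 33:39] = 1.0
    print(ssd_displacement(prev, curr, center=(32, 32)))  # expected (2, 3)
```

In a tracking loop, the returned displacement would serve as the measurement driving the adaptive controller; the exhaustive search shown here is the simplest variant and is typically restricted to a small region to keep the computation compatible with real-time operation.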
