Vision-based self-localization of a mobile robot using a virtual environment

Abstract
This paper presents a method for position estimation of mobile robots based on comparing real snapshots taken by an on-board camera with images rendered by a virtual camera in a virtual environment. We propose a technique for texturing the planar walls of a 3D model of the operating environment and use this model both to improve operator situation awareness and to support robot self-localization. Applying textures created from camera snapshots yields a more realistic impression of the environment, so that the virtual environment can even be used for inspection tasks. Furthermore, the textures provide additional visual structure, which is especially useful in hallways that otherwise offer few cues for robot navigation.
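
The core idea, comparing a real camera snapshot against renders of the textured virtual environment at candidate poses, can be illustrated with a minimal sketch. The paper does not specify the similarity measure or the rendering interface, so the sum-of-squared-differences metric and the `render_virtual_view` callable below are assumptions for illustration only (in practice the latter would be, e.g., an OpenGL render of the textured 3D wall model):

```python
# Hypothetical sketch of appearance-based self-localization: pick the
# candidate pose whose virtual snapshot best matches the real one.
# `render_virtual_view` and the SSD metric are illustrative assumptions,
# not the paper's exact method.

import numpy as np

def image_distance(real: np.ndarray, virtual: np.ndarray) -> float:
    """Sum of squared pixel differences between equally sized grayscale images."""
    diff = real.astype(np.float64) - virtual.astype(np.float64)
    return float(np.sum(diff * diff))

def estimate_pose(real_image: np.ndarray, candidate_poses, render_virtual_view):
    """Return the pose hypothesis whose rendered view best matches the snapshot.

    candidate_poses     -- iterable of (x, y, heading) hypotheses, e.g. from odometry
    render_virtual_view -- callable mapping a pose to a grayscale image of the
                           textured virtual environment
    """
    best_pose, best_score = None, np.inf
    for pose in candidate_poses:
        score = image_distance(real_image, render_virtual_view(pose))
        if score < best_score:
            best_pose, best_score = pose, score
    return best_pose
```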
