Vision-based self-localization of a mobile robot using a virtual environment
- 1 January 1999
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- Vol. 4 (ISSN 1050-4729), pp. 2911-2916
- https://doi.org/10.1109/robot.1999.774039
Abstract
This paper presents a method for position estimation of mobile robots based on comparing real snapshots taken by an on-board camera with images taken by a virtual camera in a virtual environment. We propose a technique for texturing the planar walls of a 3D model of the operating environment and use this model both to improve operator situation awareness and to support robot self-localization. Applying textures created from camera snapshots yields a more realistic impression of the environment, so that the virtual environment can even be used for inspection tasks. Furthermore, the texture provides additional structure, which is especially useful in hallways that offer only a few cues for robot navigation.
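The core idea of the abstract — render candidate views from a textured 3D model and keep the pose whose virtual image best matches the real snapshot — can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the rendering step is stubbed out with precomputed images, and the sum-of-squared-differences metric and the `estimate_pose` helper are assumptions for the sake of the example.

```python
import numpy as np

def image_difference(real, virtual):
    # Hypothetical similarity metric: sum of squared pixel differences
    # between two grayscale images of equal size.
    return float(np.sum((real.astype(np.float64) - virtual.astype(np.float64)) ** 2))

def estimate_pose(snapshot, rendered_views):
    # rendered_views: dict mapping a candidate pose (x, y, theta) to the
    # image a virtual camera would produce at that pose in the textured
    # 3D model (rendering itself is stubbed here with stored arrays).
    # Return the pose whose rendering best matches the real snapshot.
    return min(rendered_views,
               key=lambda pose: image_difference(snapshot, rendered_views[pose]))

# Toy usage: three synthetic "renderings", one identical to the snapshot.
rng = np.random.default_rng(0)
snapshot = rng.integers(0, 256, size=(8, 8))
views = {
    (0.0, 0.0, 0.0): rng.integers(0, 256, size=(8, 8)),
    (1.0, 0.0, 0.0): snapshot.copy(),   # perfect match
    (0.0, 1.0, 90.0): rng.integers(0, 256, size=(8, 8)),
}
print(estimate_pose(snapshot, views))   # -> (1.0, 0.0, 0.0)
```

In practice the candidate views would be rendered on the fly from the textured wall model rather than precomputed, and a metric robust to lighting changes would replace raw pixel differences.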