Stereo vision and navigation within buildings

Abstract
Soft modeling, stereo vision, motion planning, uncertainty reduction, image processing, and locomotion enable the Mobile Autonomous Robot Stanford to explore a benign indoor environment without human intervention. The modeling system describes rooms in terms of floor, walls, and hinged doors, and allows for unspecified obstacles. Image processing extracts vertical edges along the horizon using an edge appearance model. Stereo vision matches those edges using edge and grey-level similarity, constraint propagation, and a preference for epipolar ordering. The motion planner selects motions that are likely to increase knowledge of obstacle-free space. The results presented are from an autonomous run that included difficult passages such as navigating around a pillar without a priori knowledge.
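
To illustrate the kind of matching the abstract describes, the sketch below pairs vertical edges along one epipolar line by grey-level similarity while preserving left-to-right ordering (the "preference for epipolar ordering"), then triangulates depth from disparity. It is a minimal illustration, not the paper's algorithm: the edge descriptors, the `min_sim` threshold, and the `focal_px`/`baseline_m` values are invented for the example, and the paper's constraint propagation step is not shown.

```python
import numpy as np

def similarity(e_left, e_right):
    """Similarity of two vertical edges (higher is better).
    Here: negative distance between small grey-level feature vectors;
    a fuller system would also compare edge strength and orientation."""
    _, f_l = e_left
    _, f_r = e_right
    return -float(np.linalg.norm(np.asarray(f_l, float) - np.asarray(f_r, float)))

def match_ordered(left_edges, right_edges, min_sim=-50.0):
    """Match edges along one epipolar line with dynamic programming so that
    accepted matches preserve left-to-right ordering in both images."""
    n, m = len(left_edges), len(right_edges)
    score = np.zeros((n + 1, m + 1))            # best total similarity so far
    choice = np.zeros((n + 1, m + 1), dtype=int)  # 0 = skip left, 1 = skip right, 2 = match
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = similarity(left_edges[i - 1], right_edges[j - 1])
            options = [
                (score[i - 1][j], 0),   # leave left edge unmatched
                (score[i][j - 1], 1),   # leave right edge unmatched
            ]
            if s >= min_sim:
                options.append((score[i - 1][j - 1] + s, 2))  # match the pair
            score[i][j], choice[i][j] = max(options)
    # Backtrack to recover the matched index pairs.
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        c = choice[i][j]
        if c == 2:
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif c == 0:
            i -= 1
        else:
            j -= 1
    return list(reversed(pairs))

def depth_from_disparity(x_left, x_right, focal_px=500.0, baseline_m=0.3):
    """Triangulate depth of a matched edge from disparity (rectified cameras)."""
    disparity = x_left - x_right
    if disparity <= 0:
        return None  # at or beyond infinity; reject
    return focal_px * baseline_m / disparity

if __name__ == "__main__":
    # Toy example: each edge is (x position on the epipolar line, grey-level features).
    left = [(120.0, [40, 90]), (200.0, [200, 60]), (310.0, [80, 80])]
    right = [(100.0, [42, 88]), (175.0, [198, 63]), (290.0, [79, 82])]
    for i, j in match_ordered(left, right):
        z = depth_from_disparity(left[i][0], right[j][0])
        print(f"left edge {i} <-> right edge {j}: depth ~= {z:.2f} m")
```

The ordering constraint enters through the dynamic program: a pair can only be accepted if it extends a solution over strictly earlier edges in both images, so crossed matches are never produced.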
