Abstract
The authors describe a vision-guided mobile robot navigation system, called NEURO-NAV, that is human-like in two senses. First, the robot can function with non-metrical models of the environment, much as humans do: it does not need a geometric model, and it suffices to model the environment by the order in which landmarks appear and by adjacency relationships. Second, the robot can respond to human-supplied commands. This capability is achieved by an ensemble of neural networks whose activation and deactivation are controlled by a rule-based supervisory controller. The individual neural networks in the ensemble are trained to interpret visual information and to perform primitive navigational tasks such as hallway following and landmark detection.
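The supervisory scheme described above can be illustrated with a minimal sketch: a rule-based controller that maps the robot's current navigation context to the subset of behavior networks that should be active. The class names, rule table, and context labels here are hypothetical, introduced only to show the activation/deactivation pattern; they are not from the paper.

```python
class BehaviorNetwork:
    """Stand-in for one trained neural network performing a
    primitive navigational task (e.g. hallway following)."""
    def __init__(self, name):
        self.name = name
        self.active = False


class Supervisor:
    """Rule-based controller: given a symbolic navigation context,
    activates the networks listed for it and deactivates the rest."""
    # Hypothetical rule table mapping contexts to required behaviors.
    RULES = {
        "in_hallway": {"hallway_following", "landmark_detection"},
        "at_junction": {"turning", "landmark_detection"},
    }

    def __init__(self, networks):
        self.networks = {n.name: n for n in networks}

    def update(self, context):
        wanted = self.RULES.get(context, set())
        for name, net in self.networks.items():
            net.active = name in wanted
        # Return the active networks for inspection.
        return sorted(name for name in wanted if name in self.networks)


nets = [BehaviorNetwork(n) for n in
        ("hallway_following", "landmark_detection", "turning")]
sup = Supervisor(nets)
print(sup.update("in_hallway"))  # ['hallway_following', 'landmark_detection']
```

The point of the design is separation of concerns: the symbolic supervisor handles task sequencing, while each neural network handles only the perception-to-action mapping for its single primitive task.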
