Abstract: Visual navigation through complex environments is a challenging task, yet ants navigate them easily and accurately with low-resolution vision and limited neural resources. Inspired by ants, we have developed a series of visual familiarity-based navigation algorithms for teach-and-repeat style navigation. These algorithms learn the egocentric visual appearance of the world on a training route and, during subsequent navigation, move in the direction that leads to the best match between the current view and one of the scenes encountered during training. Because they do not depend on accurate feature extraction or map building, these algorithms can use relatively unprocessed low-resolution panoramic views, making them computationally efficient. However, the cost of comparing the current view with all training images is still substantial, and the algorithm can become confused if the path crosses itself. Here we develop and test novel algorithms in which the agent uses sequence information to adaptively select a window of route memories for navigation. These algorithms are shown to successfully navigate real-world routes through ant-like habitats, including a figure-of-eight route and a long route along a corridor, with all computation performed onboard the robot.
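The core matching step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes views are low-resolution grayscale arrays, uses negative sum-of-squared-differences as the familiarity measure, and the function names and window parameters are invented for the example.

```python
import numpy as np

def familiarity(view, memory):
    # Higher familiarity = smaller pixel-wise difference (negative SSD).
    # The actual familiarity measure is an assumption here.
    return -np.sum((view.astype(float) - memory.astype(float)) ** 2)

def best_match_in_window(current_view, route_memories, window_start, window_size):
    """Compare the current view against only a sliding window of route
    memories, rather than the whole training route, reducing computation
    and avoiding confusion where the route crosses itself."""
    window_end = min(window_start + window_size, len(route_memories))
    scores = [familiarity(current_view, m)
              for m in route_memories[window_start:window_end]]
    best = int(np.argmax(scores))
    return window_start + best, scores[best]

# Toy demo: a "route" of random low-resolution panoramic views.
rng = np.random.default_rng(0)
route = [rng.integers(0, 256, size=(10, 36)) for _ in range(20)]
current = route[7]  # agent currently sees a view from partway along the route
idx, score = best_match_in_window(current, route, window_start=5, window_size=6)
print(idx)  # -> 7 (the exact match inside the window)
```

In a full system, the window start would be advanced adaptively as the agent progresses, so that only memories near the agent's estimated route position are ever compared against the current view.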