Learning to Navigate in Mazes with Novel Layouts using Abstract Top-down Maps

Published: 15 May 2024, Last Modified: 14 Nov 2024
Venue: RLC 2024
License: CC BY 4.0
Keywords: Reinforcement Learning, Navigation, Model-based Reinforcement Learning, Map-based Navigation
TL;DR: We integrated a hypermodel into a MuZero-based approach to enable zero-shot navigation in new environments using abstract 2-D maps.
Abstract: Learning to navigate in different environments has long been a major challenge in decision-making. In this work, we focus on zero-shot navigation using given abstract 2-D top-down maps. Much like a human reading a paper map, the agent reads the map as an image when navigating a novel layout, after learning to navigate on a set of training maps. We propose a model-based reinforcement learning approach for this multi-task learning problem, which jointly learns a hypermodel that takes top-down maps as input and predicts the weights of the transition network. We use the DeepMind Lab environment and customize layouts using generated maps. Our method adapts better to novel environments in a zero-shot setting and is more robust to noise.
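As a rough illustration of the hypermodel idea described in the abstract, the sketch below (not the authors' implementation; the module names, layer sizes, and map resolution are assumptions for illustration) encodes an abstract top-down map image and emits the weights of a small transition network, which then predicts the next latent state from the current latent state and action:

```python
# Illustrative sketch only: a map-conditioned hypermodel generating the
# parameters of a per-map transition network. All sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MapHypermodel(nn.Module):
    """Encodes a 2-D top-down map and generates transition-network weights."""

    def __init__(self, latent_dim=64, action_dim=4, hidden_dim=128, map_channels=1):
        super().__init__()
        self.latent_dim = latent_dim
        self.hidden_dim = hidden_dim
        self.in_dim = latent_dim + action_dim
        # Convolutional map encoder (assumed architecture).
        self.encoder = nn.Sequential(
            nn.Conv2d(map_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, 256), nn.ReLU(),
        )
        # Heads that output weights and biases of a two-layer transition MLP.
        self.w1 = nn.Linear(256, hidden_dim * self.in_dim)
        self.b1 = nn.Linear(256, hidden_dim)
        self.w2 = nn.Linear(256, latent_dim * hidden_dim)
        self.b2 = nn.Linear(256, latent_dim)

    def forward(self, map_image):
        """map_image: (B, C, H, W) -> dict of per-map transition parameters."""
        z = self.encoder(map_image)
        return {
            "w1": self.w1(z).view(-1, self.hidden_dim, self.in_dim),
            "b1": self.b1(z),
            "w2": self.w2(z).view(-1, self.latent_dim, self.hidden_dim),
            "b2": self.b2(z),
        }


def transition(params, state, action_onehot):
    """Applies the generated transition network: (s_t, a_t) -> predicted s_{t+1}."""
    x = torch.cat([state, action_onehot], dim=-1)
    h = F.relu(torch.einsum("bij,bj->bi", params["w1"], x) + params["b1"])
    return torch.einsum("bij,bj->bi", params["w2"], h) + params["b2"]


if __name__ == "__main__":
    hyper = MapHypermodel()
    maps = torch.rand(2, 1, 32, 32)              # batch of abstract top-down maps
    state = torch.randn(2, 64)                   # current latent states
    action = F.one_hot(torch.tensor([0, 3]), 4).float()
    params = hyper(maps)                         # map-specific transition weights
    next_state = transition(params, state, action)
    print(next_state.shape)                      # torch.Size([2, 64])
```

In a MuZero-style agent, a generated transition function of this kind would replace the shared dynamics model during planning, so that unrolling latent states is conditioned on the given map and can transfer zero-shot to a novel layout.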
Submission Number: 341