Learning Exploration Policies for Navigation

Published: 21 Dec 2018 · Last Modified: 29 Sept 2024 · ICLR 2019 Conference Blind Submission
Abstract: Numerous past works have tackled the problem of task-driven navigation, but how to effectively explore a new environment to enable a variety of down-stream tasks has received much less attention. In this work, we study how agents can autonomously explore realistic and complex 3D environments without task rewards. We propose a learning-based approach and investigate different policy architectures, reward functions, and training paradigms. We find that policies with spatial memory, bootstrapped with imitation learning and then finetuned with coverage rewards derived purely from on-board sensors, can be effective at exploring novel environments. We show that our learned exploration policies explore better than both classical approaches based on geometry alone and generic learning-based exploration techniques. Finally, we show how such task-agnostic exploration can be used for down-stream tasks. Videos are available at https://sites.google.com/view/exploration-for-nav/.
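To make the coverage-reward idea concrete, here is a minimal sketch, assuming the agent projects on-board depth readings into a 2D top-down grid map. The class name `CoverageReward`, the grid size, and the `visible_cells` input are illustrative assumptions, not the paper's actual implementation (see the linked exp4nav repository for that):

```python
import numpy as np

class CoverageReward:
    """Illustrative coverage-based exploration reward (hypothetical sketch).

    Assumes the agent integrates on-board depth readings into a 2D
    top-down grid; the reward at each step is the number of grid
    cells observed for the first time in the episode.
    """

    def __init__(self, map_size=(200, 200)):
        self.seen = np.zeros(map_size, dtype=bool)  # cells covered so far

    def step(self, visible_cells):
        """visible_cells: (N, 2) int array of grid cells visible this step."""
        cells = np.unique(visible_cells, axis=0)  # drop duplicate cells
        rows, cols = cells[:, 0], cells[:, 1]
        new = ~self.seen[rows, cols]              # first-time observations
        self.seen[rows, cols] = True
        return int(new.sum())                     # reward = newly covered area
```

Because each cell pays out only once, this reward is dense early in an episode and decays as the map fills in, which is what pushes the policy toward unvisited regions.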
Keywords: Exploration, navigation, reinforcement learning
Code: [taochenshh/exp4nav](https://github.com/taochenshh/exp4nav) + [1 community implementation](https://paperswithcode.com/paper/?openreview=SyMWn05F7)
Data: [House3D Environment](https://paperswithcode.com/dataset/house3d-environment)
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/learning-exploration-policies-for-navigation/code)