Scaling Vision-and-Language Navigation With Offline RL

Published: 06 May 2024, Last Modified: 06 May 2024, Accepted by TMLR
Abstract: The study of vision-and-language navigation (VLN) has typically relied on expert trajectories, which may not always be available in real-world settings due to the significant effort required to collect them. Existing approaches to training VLN agents that go beyond available expert data rely on data augmentation or online exploration, which can be tedious and risky. In contrast, large repositories of suboptimal offline trajectories are easy to access. Inspired by research in offline reinforcement learning (ORL), we introduce VLN-ORL, a new problem setup that studies VLN using suboptimal demonstration data. We propose a simple and effective reward-conditioned approach that accounts for dataset suboptimality when training VLN agents, as well as benchmarks to evaluate progress and promote research in this area. We empirically study various noise models for characterizing dataset suboptimality, among other challenges unique to VLN-ORL, and instantiate our approach for the VLN⟳BERT and MTVM architectures in the R2R and RxR environments. Our experiments demonstrate that the proposed reward-conditioned approach yields significant performance improvements, even in complex and intricate environments.
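To make the idea of reward conditioning concrete, the sketch below shows a generic return/reward-conditioned policy in PyTorch: the scalar reward-to-go computed from a (possibly suboptimal) offline trajectory is embedded and concatenated with the observation features before predicting an action, and at evaluation time the agent can instead be conditioned on a high target reward. This is a minimal illustration of the general technique, not the paper's implementation; all names (`RewardConditionedPolicy`, `obs_dim`, `reward_embed`, etc.) are hypothetical.

```python
# Minimal, hypothetical sketch of reward conditioning for offline training.
# Names and dimensions are illustrative, not taken from the paper's code.
import torch
import torch.nn as nn

class RewardConditionedPolicy(nn.Module):
    """Predicts action logits from observation features concatenated with an
    embedded reward-to-go token, so behavior can be steered by the conditioning
    reward at test time."""
    def __init__(self, obs_dim: int, num_actions: int, hidden: int = 256):
        super().__init__()
        self.reward_embed = nn.Linear(1, hidden)    # embed scalar reward-to-go
        self.obs_embed = nn.Linear(obs_dim, hidden) # embed observation features
        self.head = nn.Sequential(
            nn.ReLU(), nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, obs: torch.Tensor, reward_to_go: torch.Tensor) -> torch.Tensor:
        r = self.reward_embed(reward_to_go.unsqueeze(-1))
        o = self.obs_embed(obs)
        return self.head(torch.cat([o, r], dim=-1))  # action logits

# Training on offline data: condition on the observed reward-to-go and imitate
# the logged (possibly suboptimal) actions.
policy = RewardConditionedPolicy(obs_dim=512, num_actions=6)
obs = torch.randn(8, 512)            # batch of observation features
reward_to_go = torch.rand(8)         # per-step reward-to-go from the dataset
actions = torch.randint(0, 6, (8,))  # logged actions
loss = nn.functional.cross_entropy(policy(obs, reward_to_go), actions)
loss.backward()
```

At deployment, one would replace `reward_to_go` with a fixed high target value so the conditioned policy favors the behavior associated with good returns in the offline data.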
Submission Length: Long submission (more than 12 pages of main content)
Code: https://github.com/Valaybundele/RewardC-VLN-ORL
Assigned Action Editor: ~Michael_Bowling1
Submission Number: 1870