Sim-to-Real Transfer via 3D Feature Fields for Vision-and-Language Navigation

Published: 05 Sept 2024, Last Modified: 16 Oct 2024 · CoRL 2024 · CC BY 4.0
Keywords: Vision-and-Language Navigation, 3D Feature Fields, Semantic Traversable Map
Abstract: Vision-and-language navigation (VLN) enables an agent to navigate to a remote location in a 3D environment by following natural language instructions. In this field, agents are typically trained and evaluated in navigation simulators, and effective approaches for sim-to-real transfer are lacking. VLN agents equipped with only a monocular camera exhibit severely limited performance, while mainstream VLN models trained on panoramic observations perform better but are difficult to deploy on most monocular robots. To bridge this gap, we propose a sim-to-real transfer approach that endows monocular robots with panoramic traversability perception and panoramic semantic understanding, thereby smoothly transferring high-performance panoramic VLN models to common monocular robots. Specifically, we propose a semantic traversable map that predicts agent-centric navigable waypoints, and we predict novel view representations at these waypoints through 3D feature fields. Together, these methods broaden the limited field of view of monocular robots and significantly improve navigation performance in the real world. Our VLN system outperforms previous state-of-the-art monocular VLN methods on the R2R-CE and RxR-CE benchmarks in simulation and is also validated in real-world environments, providing a practical, high-performance solution for real-world VLN.
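To make the pipeline in the abstract concrete, below is a minimal Python sketch of one decision step: fuse a monocular RGB-D frame into a traversable map, sample agent-centric waypoints, query a 3D feature field for novel-view features at each waypoint, and let a panoramic VLN policy pick the next waypoint. All names, shapes, and stub implementations here are illustrative assumptions, not the authors' actual code; see the linked repository for the real implementation.

```python
import numpy as np

# Hypothetical sketch of the monocular-to-panoramic VLN pipeline.
# FEATURE_DIM and NUM_HEADINGS are assumed values, not from the paper.
FEATURE_DIM = 768    # assumed dimensionality of a rendered view feature
NUM_HEADINGS = 12    # panoramic VLN models commonly use 12 headings

def update_traversable_map(grid, rgb, depth, pose):
    """Fuse one monocular RGB-D frame into an agent-centric 2D map whose
    cells hold a traversability probability (stub: mark cells as free)."""
    grid[:] = np.maximum(grid, 0.5)  # placeholder for learned map update
    return grid

def sample_waypoints(grid, num_waypoints=8):
    """Sample navigable waypoints from high-probability cells of the
    semantic traversable map (stub: uniform over free cells)."""
    free = np.argwhere(grid > 0.4)
    idx = np.random.choice(len(free), size=min(num_waypoints, len(free)),
                           replace=False)
    return free[idx]

def render_view_feature(feature_field, waypoint, heading):
    """Query the 3D feature field for a novel-view representation at a
    candidate waypoint and heading (stub: random feature)."""
    return np.random.randn(FEATURE_DIM).astype(np.float32)

def panoramic_vln_step(instruction_tokens, pano_features):
    """Stand-in for a pretrained panoramic VLN policy: score each
    candidate waypoint and return the index of the best one."""
    scores = pano_features.mean(axis=(1, 2))
    return int(np.argmax(scores))

# One decision step from a single monocular observation.
grid = np.zeros((128, 128), dtype=np.float32)          # agent-centric map
rgb, depth, pose = np.zeros((480, 640, 3)), np.ones((480, 640)), np.eye(4)
feature_field = None                                   # stub for the field

grid = update_traversable_map(grid, rgb, depth, pose)
waypoints = sample_waypoints(grid)
pano = np.stack([                                      # (W, 12, FEATURE_DIM)
    np.stack([render_view_feature(feature_field, w, h)
              for h in range(NUM_HEADINGS)])
    for w in waypoints
])
best = panoramic_vln_step(instruction_tokens=[], pano_features=pano)
print("navigate toward waypoint", waypoints[best])
```

The key design point this sketch illustrates is that the panoramic VLN policy never sees raw monocular pixels: the traversable map and feature field together synthesize the panoramic candidate-waypoint features the policy was trained on, which is what allows a simulator-trained panoramic model to run on a monocular robot.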
Supplementary Material: zip
Spotlight Video: mp4
Code: https://github.com/MrZihan/Sim2Real-VLN-3DFF
Publication Agreement: pdf
Student Paper: yes
Submission Number: 160