Loc4Plan: Locating Before Planning for Outdoor Vision and Language Navigation

Published: 20 Jul 2024, Last Modified: 21 Jul 2024. MM 2024 Oral. License: CC BY 4.0
Abstract: Vision and Language Navigation (VLN) is a challenging task that requires agents to understand instructions and navigate to the destination in a visual environment. One of the key challenges in outdoor VLN is keeping track of which part of the instruction has been completed. To alleviate this problem, previous works mainly focus on grounding the natural language to the visual input, but neglect the crucial role of the agent's spatial position information in the grounding process. In this work, drawing inspiration from human navigation, we first explore the substantial effect of spatial position locating on grounding in outdoor VLN. In real-world navigation scenarios, before planning a path to the destination, humans typically need to figure out their current location; this observation underscores the pivotal role of spatial localization in the navigation process. We therefore introduce a novel framework, Locating before Planning (Loc4Plan), designed to incorporate spatial perception into action planning for outdoor VLN tasks. The main idea behind Loc4Plan is to localize the agent spatially before planning a decision action based on the corresponding guidance, realized by a block-aware spatial locating (BAL) module and a spatial-aware action planning (SAP) module. Specifically, to help the agent perceive its spatial location in the environment, the BAL module learns a position predictor that measures how far the agent is from the next intersection to reflect its position. After the locating process, the SAP module incorporates this spatial information to ground the corresponding guidance and improve the precision of action planning. Extensive experiments on the Touchdown and map2seq datasets show that the proposed Loc4Plan outperforms state-of-the-art methods.
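As a rough, non-authoritative illustration of the locate-then-plan idea described above, the sketch below wires a distance-to-intersection predictor (standing in for the BAL module) in front of an instruction-grounded action planner (standing in for the SAP module). All module names, feature shapes, and internals are assumptions made for this sketch; it is not the authors' Loc4Plan implementation.

```python
# Illustrative sketch only: a minimal locate-then-plan agent in PyTorch.
# Hyperparameters, feature shapes, and module internals are assumptions,
# NOT the authors' released implementation of Loc4Plan.
import torch
import torch.nn as nn

class BlockAwareLocator(nn.Module):
    """BAL-style head: estimates how far the agent is from the next intersection."""
    def __init__(self, visual_dim=512, hidden_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(visual_dim, hidden_dim), nn.ReLU())
        self.distance_head = nn.Linear(hidden_dim, 1)  # scalar distance estimate

    def forward(self, visual_feat):
        spatial_feat = self.encoder(visual_feat)
        dist_to_intersection = self.distance_head(spatial_feat)
        return spatial_feat, dist_to_intersection

class SpatialAwarePlanner(nn.Module):
    """SAP-style planner: grounds the instruction with the located spatial state, then scores actions."""
    def __init__(self, text_dim=512, hidden_dim=256, num_actions=4):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.ground = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        self.action_head = nn.Linear(hidden_dim, num_actions)

    def forward(self, spatial_feat, instr_tokens):
        # The located spatial state queries the instruction tokens (cross-modal grounding).
        instr = self.text_proj(instr_tokens)
        grounded, _ = self.ground(spatial_feat.unsqueeze(1), instr, instr)
        return self.action_head(grounded.squeeze(1))  # action logits

# Toy forward pass: locate first, then plan.
locator, planner = BlockAwareLocator(), SpatialAwarePlanner()
visual_feat = torch.randn(2, 512)        # panorama features (assumed shape)
instr_tokens = torch.randn(2, 30, 512)   # instruction token features (assumed shape)
spatial_feat, dist = locator(visual_feat)
action_logits = planner(spatial_feat, instr_tokens)
```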
Primary Subject Area: [Experience] Multimedia Applications
Secondary Subject Area: [Content] Vision and Language
Relevance To Conference: Our method studies the task of Vision and Language Navigation (VLN), which requires agents to understand instructions and navigate to the destination in a visual environment. It involves the two modalities of vision and language, and is an important task in multimedia information processing. Our approach is built upon the principle of "localization before decision-making", providing a new paradigm for processing multimodal information. Specifically, we design the block-aware spatial locating (BAL) module to leverage visual cues and achieve spatial awareness at the block level, localizing the agent's approximate position. Complementing the visual modality, our proposed Hierarchical Semantic Association (HSA) module focuses on the linguistic modality, identifying the corresponding navigation instructions through semantic association. Finally, visual features and text features interact with each other to plan navigation actions. By combining linguistic information with visual spatial position information, our method effectively guides subsequent decision-making, showcasing our novel contribution to multimodal interaction for VLN.
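To make the "localization before decision-making" principle concrete, the toy objective below pairs the action-prediction loss with an auxiliary regression loss for the distance-to-intersection predictor from the locating step. The loss form and the 0.5 weighting are assumptions for illustration, not the paper's reported training objective.

```python
# Illustrative only: an assumed joint objective combining action planning with
# the auxiliary distance-to-intersection regression from the locating step.
import torch
import torch.nn.functional as F

def locate_then_plan_loss(action_logits, action_gt, dist_pred, dist_gt, aux_weight=0.5):
    """Cross-entropy for action planning plus MSE for the position (locating) predictor."""
    action_loss = F.cross_entropy(action_logits, action_gt)
    locate_loss = F.mse_loss(dist_pred.squeeze(-1), dist_gt)
    return action_loss + aux_weight * locate_loss

# Toy tensors standing in for the outputs of the sketch after the abstract.
action_logits = torch.randn(2, 4, requires_grad=True)   # planner logits
dist_pred = torch.randn(2, 1, requires_grad=True)       # predicted distance to intersection
action_gt = torch.tensor([0, 2])                         # ground-truth actions
dist_gt = torch.tensor([3.0, 1.0])                       # ground-truth distances
loss = locate_then_plan_loss(action_logits, action_gt, dist_pred, dist_gt)
loss.backward()
```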
Supplementary Material: zip
Submission Number: 4473