Keywords: embodied dialog, vln, vision-and-language-navigation, task-oriented-dialog, human-ai-collaboration
Abstract: We introduce DialNav, a novel collaborative embodied dialog task in which a navigation agent (Navigator) and a remote guide (Guide) engage in multi-turn dialog to reach a goal location.
Unlike prior work, DialNav aims for holistic evaluation and requires the Guide to infer the Navigator's location, making communication essential for task success.
To support this task, we collect and release the Remote Assistance in Navigation (RAIN) dataset, consisting of human-human dialogs paired with navigation trajectories in photorealistic environments.
We design a comprehensive benchmark to evaluate both navigation and dialog, and conduct extensive experiments analyzing the impact of different Navigator and Guide models.
We highlight key challenges and publicly release the dataset, code, and evaluation framework to foster future research in embodied dialog.
Submission Number: 5