Knowledge-driven Scene Priors for Semantic Audio-Visual Embodied Navigation

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission · Readers: Everyone
Keywords: Scene Priors, Modular Training, Reinforcement Learning, Audio-Visual, Robot Navigation, Embodied
Abstract: Generalisation to unseen contexts remains a challenge for embodied navigation agents. In the context of semantic audio-visual navigation (SAVi) tasks, generalisation includes both generalising to unseen indoor visual scenes and generalising to unheard sounding objects. Previous SAVi task definitions do not include evaluation conditions on truly novel sounding objects, resorting instead to evaluating agents on unheard sound clips of known objects; meanwhile, previous SAVi methods do not include explicit mechanisms for incorporating domain knowledge about object and region semantics. These weaknesses limit the development and assessment of models' abilities to generalise their learned experience. In this work, we introduce the use of knowledge-driven scene priors in the semantic audio-visual embodied navigation task: we combine semantic information from our novel knowledge graph that encodes object-region relations, spatial knowledge from dual Graph Convolutional Networks, and background knowledge from a series of pre-training tasks, all within a reinforcement learning framework for audio-visual navigation. We define a new audio-visual navigation sub-task, where agents are evaluated on novel sounding objects, as opposed to unheard clips of known objects. We show state-of-the-art results on multiple semantic audio-visual navigation benchmarks, within the Habitat-Matterport3D simulator, where we also show improvements in generalisation to unseen regions and novel sounding objects. We release our code, knowledge graph, and dataset in the supplementary material.
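The abstract describes encoding an object-region knowledge graph with Graph Convolutional Networks as a scene prior. The snippet below is a minimal, hedged sketch of how such an encoder could look; it is not the authors' released code, and the layer definition (a standard normalised-adjacency GCN layer), the toy adjacency matrix, and all names are illustrative assumptions.

```python
# Hedged sketch: encoding an object-region knowledge graph with one GCN layer.
# Assumes a Kipf-and-Welling-style layer H' = ReLU(Â H W); not the paper's actual model.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution layer over a symmetrically normalised adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats, adj):
        # Add self-loops, then normalise: Â = D^{-1/2} (A + I) D^{-1/2}.
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
        norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt
        return torch.relu(self.linear(norm_adj @ node_feats))

# Toy graph: two object nodes and two region nodes, with edges expressing
# hypothetical "object occurs in region" relations (purely illustrative).
adj = torch.tensor([[0., 0., 1., 0.],
                    [0., 0., 1., 1.],
                    [1., 1., 0., 0.],
                    [0., 1., 0., 0.]])
node_feats = torch.randn(4, 16)                # e.g. semantic embeddings per node
prior_encoder = GCNLayer(16, 32)
scene_prior = prior_encoder(node_feats, adj)   # per-node embeddings used as a scene prior
```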
One-sentence Summary: We use scene priors to improve generalisation to unseen contexts in semantic audio-visual navigation, and we release a new task dataset and knowledge graph.
Supplementary Material: zip