Achieving fast audio-visual embodied navigation in 3D environments remains a challenging problem. Existing methods typically process audio and visual data separately and merge them only at a late stage, leading to suboptimal path planning and increased time to locate targets. In this paper, we introduce FavEN, a novel Transformer and Mamba architecture that combines audio and visual data into $\textit{early fusion}$ tokens. These tokens are passed through the entire network from the first layer onward and cross-attend to both data modalities. As a result, the network can correlate information from the two modalities from the outset, which substantially improves its downstream navigation performance. We demonstrate this empirically through experimental results on the Replica and Matterport3D benchmarks. Furthermore, for the first time, we demonstrate the effectiveness of early fusion in improving the path search speed of audio-visual embodied navigation systems in real-world settings. Across these benchmarks, compared with previous approaches, FavEN reduces the search time by 93.6% and improves the SPL metric by 10.4 and 6.5 points on heard and unheard sounds, respectively.
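To make the early-fusion idea concrete, here is a minimal PyTorch sketch of learnable fusion tokens that cross-attend to audio and visual tokens from the very first layer. All module names, dimensions, and the choice of standard `nn.MultiheadAttention` are illustrative assumptions; this is not the paper's actual implementation.

```python
# Sketch of early fusion: learnable fusion tokens cross-attend to both
# modalities starting at layer 1, rather than merging features late.
# All names and shapes here are illustrative assumptions.
import torch
import torch.nn as nn


class EarlyFusionLayer(nn.Module):
    """One layer: fusion tokens query both modalities via cross-attention."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, fusion, audio, visual):
        # Fusion tokens attend jointly to audio and visual tokens,
        # correlating the two modalities at this (early) layer.
        context = torch.cat([audio, visual], dim=1)
        attended, _ = self.cross_attn(query=fusion, key=context, value=context)
        fusion = self.norm(fusion + attended)
        return fusion + self.mlp(fusion)


class EarlyFusionEncoder(nn.Module):
    """Stack of layers; fusion tokens are carried from the first layer on."""

    def __init__(self, dim: int = 256, depth: int = 4, num_fusion_tokens: int = 8):
        super().__init__()
        self.fusion_tokens = nn.Parameter(torch.randn(1, num_fusion_tokens, dim))
        self.layers = nn.ModuleList(EarlyFusionLayer(dim) for _ in range(depth))

    def forward(self, audio, visual):
        fusion = self.fusion_tokens.expand(audio.size(0), -1, -1)
        for layer in self.layers:
            fusion = layer(fusion, audio, visual)
        return fusion  # fused representation for a downstream navigation policy


# Usage with hypothetical audio-spectrogram and visual-patch tokens
# produced by modality-specific tokenizers (shapes are illustrative).
audio = torch.randn(2, 64, 256)    # (batch, audio tokens, dim)
visual = torch.randn(2, 196, 256)  # (batch, visual tokens, dim)
fused = EarlyFusionEncoder()(audio, visual)
print(fused.shape)  # torch.Size([2, 8, 256])
```

The contrast with late fusion is that here the fusion tokens see both modalities at every layer, so cross-modal correlations can inform all intermediate representations rather than only the final merged features.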