Move2Hear: Active Audio-Visual Source Separation

01 Nov 2022 · OpenReview Archive Direct Upload
Abstract: We introduce the active audio-visual source separation problem, where an agent must move intelligently in order to better isolate the sounds coming from an object of interest in its environment. The agent hears multiple audio sources simultaneously (e.g., a person speaking down the hall in a noisy household) and it must use its eyes and ears to automatically separate out the sounds originating from a target object within a limited time budget. Towards this goal, we introduce a reinforcement learning approach that trains movement policies controlling the agent's camera and microphone placement over time, guided by the improvement in predicted audio separation quality. We demonstrate our approach in scenarios motivated by both augmented reality (system is already co-located with the target object) and mobile robotics (agent begins arbitrarily far from the target object). Using state-of-the-art realistic audio-visual simulations in 3D environments, we demonstrate our model's ability to find minimal movement sequences with maximal payoff for audio source separation. Project: http://vision.cs.utexas.edu/projects/move2hear.
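To make the reward idea in the abstract concrete, below is a minimal sketch of how a per-step reward could be defined as the improvement in predicted separation quality between consecutive agent movements. The quality metric (negative L2 error between the predicted separated waveform and the ground-truth target sound) and all function names here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np


def separation_quality(pred_wave: np.ndarray, target_wave: np.ndarray) -> float:
    """Hypothetical separation-quality score: negative mean squared error
    between the predicted separated waveform and the target source waveform.
    Higher is better (zero error gives the maximum score of 0)."""
    return -float(np.mean((pred_wave - target_wave) ** 2))


def step_reward(prev_quality: float, curr_quality: float) -> float:
    """Reward for one movement step: the improvement in predicted
    separation quality obtained by the agent's latest camera/microphone
    placement, as described in the abstract."""
    return curr_quality - prev_quality


# Example usage with toy waveforms (placeholders for the agent's
# separated output before and after moving, and the true target sound).
rng = np.random.default_rng(0)
target = rng.standard_normal(16000)
pred_before = target + 0.5 * rng.standard_normal(16000)  # noisier estimate
pred_after = target + 0.2 * rng.standard_normal(16000)   # cleaner after moving

q_before = separation_quality(pred_before, target)
q_after = separation_quality(pred_after, target)
print("step reward:", step_reward(q_before, q_after))  # positive: quality improved
```

In this toy setup the reward is positive whenever a move yields a cleaner separated estimate, which is the signal an RL policy over camera and microphone placement would be trained to maximize.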