Using audio-visual information to understand speaker activity: Tracking active speakers on and off screen

Published: 01 Jan 2018, Last Modified: 05 Jul 2023 · ICASSP 2018
Abstract: We present a system that associates faces with voices in a video by fusing information from the audio and visual signals. The thesis underlying our work is that an extremely simple approach to generating (weak) speech clusters can be combined with strong visual signals to effectively associate faces and voices by aggregating statistics across a video. This approach does not need any training data specific to this task and leverages the natural coherence of information in the audio and visual streams. It is particularly applicable to tracking speakers in videos on the web, where a priori information about the environment (e.g., number of speakers, spatial signals for beamforming) is not available.
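The core idea of aggregating audio-visual statistics can be illustrated with a minimal sketch. The code below is not the paper's actual system; it assumes hypothetical inputs in which a video has already been split into segments, each labeled with a weak voice cluster (from audio) and the set of face tracks visible on screen (from vision). Each face is then associated with the voice cluster it co-occurs with most often across the whole video.

```python
from collections import Counter, defaultdict

def associate_faces_with_voices(segments):
    """Aggregate co-occurrence counts between weak audio voice clusters
    and visible face tracks, then assign each face to the voice cluster
    it co-occurs with most often across the video.

    segments: iterable of (voice_cluster_id, [face_track_ids]) pairs,
    one per video segment. All identifiers here are hypothetical.
    """
    counts = defaultdict(Counter)
    for voice_cluster, visible_faces in segments:
        for face in visible_faces:
            counts[face][voice_cluster] += 1
    # Per-face majority vote over the aggregated statistics.
    return {face: c.most_common(1)[0][0] for face, c in counts.items()}

# Toy video: (active voice cluster, faces on screen) for each segment.
segments = [
    ("v0", ["alice", "bob"]),
    ("v0", ["alice"]),
    ("v1", ["bob"]),
    ("v1", ["alice", "bob"]),
]
print(associate_faces_with_voices(segments))
# → {'alice': 'v0', 'bob': 'v1'}
```

Because noisy per-segment evidence is pooled over the full video, individual clustering errors in the weak audio signal tend to wash out, which reflects the aggregation idea described in the abstract.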