Evaluating Audiovisual Source Separation in the Context of Video Conferencing

Published: 01 Jan 2019, INTERSPEECH 2019
Abstract: Source separation involving mono-channel audio is a challenging problem, in particular for speech separation, where source contributions overlap in both time and frequency. This task is of high interest for applications such as video conferencing. Recent progress in machine learning has shown that combining visual cues from the video can increase source separation performance. Starting from a recently designed deep neural network, we assess its ability and robustness in separating the visible speakers' speech from other interfering speech or signals. We test it on different configurations of video recordings in which the speaker's face may not be fully visible. We also assess the performance of the network with respect to different sets of visual features extracted from the speakers' faces.
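To make the general idea concrete, here is a minimal sketch of an audiovisual mask-estimation separator of the kind the abstract describes: a network that fuses the mixture spectrogram with per-frame visual face embeddings to predict a time-frequency mask for the target speaker. This is not the paper's architecture; the class name, layer sizes, and feature dimensions are all illustrative assumptions.

```python
# Hypothetical sketch of audiovisual mask-based separation (PyTorch).
# Not the paper's model; all dimensions are illustrative.
import torch
import torch.nn as nn

class AVSeparator(nn.Module):
    def __init__(self, n_freq=257, visual_dim=512, hidden=256):
        super().__init__()
        self.audio_rnn = nn.LSTM(n_freq, hidden, batch_first=True)
        self.visual_proj = nn.Linear(visual_dim, hidden)
        self.fusion_rnn = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.mask_head = nn.Sequential(nn.Linear(hidden, n_freq), nn.Sigmoid())

    def forward(self, mix_mag, visual_feats):
        # mix_mag: (batch, frames, n_freq) magnitude spectrogram of the mixture
        # visual_feats: (batch, frames, visual_dim) face embeddings, upsampled
        # from the video frame rate to the spectrogram frame rate
        a, _ = self.audio_rnn(mix_mag)
        v = self.visual_proj(visual_feats)
        fused, _ = self.fusion_rnn(torch.cat([a, v], dim=-1))
        mask = self.mask_head(fused)   # soft mask in [0, 1]
        return mask * mix_mag          # estimated target-speaker magnitude

# Usage: a 3-second mixture at 16 kHz with a 512-point STFT (257 bins).
model = AVSeparator()
mix = torch.rand(1, 300, 257)
faces = torch.rand(1, 300, 512)
est = model(mix, faces)
print(est.shape)  # torch.Size([1, 300, 257])
```

Masking the mixture spectrogram, rather than regressing the target directly, is a common design choice in this setting; the visual stream lets the mask follow the visible speaker even when the interfering signal is also speech.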