Single Channel Source Separation in the Wild – Conversational Speech in Realistic Environments

Emil Berger, Barbara Schuppler, Martin Hagmüller, Franz Pernkopf

Published: 01 Jan 2023, Last Modified: 04 Nov 2025
Speech Communication - 15th ITG Conference
License: CC BY-SA 4.0
Abstract: Recent progress in Single Channel Source Separation (SCSS) using deep neural networks has led to impressive performance gains, while also increasing model sizes and requiring tremendous data resources. This demand is covered by artificially composed speech and noise mixtures that do not capture the real-life characteristics of conversations taking place in noisy environments. This paper introduces a new dataset containing task-oriented dialogues spoken in a realistic environment and presents experimental results for two SCSS architectures: the Conv-TasNet and the transformer-based MossFormer. Overall, we observe a severe drop in performance of up to 4.3 dB (SI-SDR improvement) for the 8 kHz variant of the Conv-TasNet. For same-sex speaker pairs, the difference is even larger, up to 6 dB. Only the model using a 16 kHz sample rate performs on a comparable level for mixed-sex speaker pairs. Our findings illustrate the need for realistic data in both training and evaluation.
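The abstract reports results as SI-SDR improvement, i.e. the gain in Scale-Invariant Signal-to-Distortion Ratio of the separated signal over the unprocessed mixture. A minimal NumPy sketch of this standard metric is shown below; it is not the authors' evaluation code, just the commonly used formulation (zero-mean signals, estimate projected onto the reference so overall gain does not affect the score).

```python
import numpy as np

def si_sdr(estimate, reference):
    """Scale-Invariant Signal-to-Distortion Ratio in dB.

    The reference is rescaled by the optimal factor alpha so that the
    metric is invariant to the overall gain of the estimate.
    """
    reference = reference - np.mean(reference)
    estimate = estimate - np.mean(estimate)
    # optimal scaling: alpha = <estimate, reference> / ||reference||^2
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference          # scaled target component
    noise = estimate - target           # everything not explained by the target
    return 10.0 * np.log10(np.dot(target, target) / np.dot(noise, noise))

def si_sdr_improvement(estimate, mixture, reference):
    """SI-SDR gain of the separated estimate over the raw mixture."""
    return si_sdr(estimate, reference) - si_sdr(mixture, reference)
```

A separation model that moves the estimate closer to the clean reference than the mixture was yields a positive improvement; the drops reported in the paper are differences in this improvement between synthetic and realistic test data.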