Bootstrapping Unsupervised Deep Music Separation from Primitive Auditory Grouping Principles

Published: 02 Jul 2020 (SAS 2020), Last Modified: 05 May 2023
Keywords: auditory scene analysis, deep learning, cocktail party problem, self-supervised learning, bootstrapping
TL;DR: We train deep audio separation models without ground truth, using noisy labels produced by brain-inspired primitive audio source separation algorithms.
Abstract: Separating an audio scene, such as a cocktail party with multiple overlapping voices, into meaningful components (e.g., individual voices) is a core task in computer audition, analogous to image segmentation in computer vision. Deep networks are the state-of-the-art approach. They are typically trained on synthetic mixtures of isolated sound-source recordings, so that the ground-truth separation is known. However, the vast majority of available audio is not isolated. The human brain performs an initial segmentation of the audio scene using primitive cues that apply broadly across many kinds of sound sources. We present a method to train a deep source separation model in an unsupervised way by bootstrapping from multiple primitive cues. We apply our method to train a network on a large set of unlabeled music recordings to separate vocals from accompaniment, without ground-truth isolated sources or artificial training mixtures. A companion notebook with audio examples and code for the experiments is available: https://github.com/pseeth/bootstrapping-computer-audition.
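The abstract describes the bootstrapping recipe only at a high level. The sketch below illustrates the general idea under stated assumptions; it is not the authors' pipeline. A training-free primitive cue (here, a crude median-filter harmonic/percussive mask, standing in for the paper's primitive grouping algorithms) produces noisy pseudo-label masks from unlabeled mixtures, and a small mask-estimation network is fit to those labels. All names, the architecture, and the hyperparameters are illustrative.

```python
# Minimal sketch of bootstrapping a separation network from a primitive cue.
# Everything here is a hypothetical stand-in, not the paper's implementation.
import numpy as np
import scipy.ndimage
import torch
import torch.nn as nn

N_FFT, HOP = 1024, 256

def stft_mag(audio: torch.Tensor) -> torch.Tensor:
    """Magnitude spectrogram of a mono waveform, shape (freq, time)."""
    spec = torch.stft(audio, n_fft=N_FFT, hop_length=HOP,
                      window=torch.hann_window(N_FFT), return_complex=True)
    return spec.abs()

def primitive_mask(mag: torch.Tensor) -> torch.Tensor:
    """Noisy 'vocals' pseudo-label mask from a primitive, training-free cue.
    Energy smooth across time is treated as accompaniment-like; energy smooth
    across frequency as vocals-like. A crude proxy for primitive grouping."""
    m = mag.numpy()
    harmonic = scipy.ndimage.median_filter(m, size=(1, 17))    # smooth in time
    percussive = scipy.ndimage.median_filter(m, size=(17, 1))  # smooth in freq
    vocals = percussive / (harmonic + percussive + 1e-8)
    return torch.from_numpy(vocals).float()

class MaskNet(nn.Module):
    """Tiny mask estimator: BLSTM over spectrogram frames -> sigmoid mask."""
    def __init__(self, n_freq: int = N_FFT // 2 + 1, hidden: int = 128):
        super().__init__()
        self.rnn = nn.LSTM(n_freq, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_freq)

    def forward(self, mag):                     # mag: (batch, freq, time)
        h, _ = self.rnn(mag.transpose(1, 2))    # -> (batch, time, freq)
        return torch.sigmoid(self.out(h)).transpose(1, 2)

def train_step(model, optimizer, mixture: torch.Tensor) -> float:
    """One bootstrapped update: fit the network's mask to the primitive's."""
    mag = stft_mag(mixture)
    target = primitive_mask(mag)                # noisy label, no ground truth
    pred = model(mag.unsqueeze(0))[0]
    # Weight the loss by mixture energy so high-energy bins dominate; one
    # simple way to downweight label noise from the primitive in quiet bins.
    weight = mag / (mag.sum() + 1e-8)
    loss = (weight * (pred - target) ** 2).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = MaskNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    mixture = torch.randn(4 * 16000)            # stand-in for an unlabeled song
    for step in range(3):
        print(f"step {step}: loss = {train_step(model, opt, mixture):.6f}")
```

Because the pseudo-labels come from signal-level heuristics rather than ground truth, they can be computed for any unlabeled recording; the hope, as in the paper, is that the learned model generalizes beyond the primitive cues that supervised it.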
Double Submission: No