Stabilizing Label Assignment for Speech Separation by Self-Supervised Pre-Training

Interspeech 2021
Abstract: Speech separation has advanced considerably with the highly successful permutation invariant training (PIT) approach, but the frequent label-assignment switching that occurs during PIT training remains a problem when faster convergence and better achievable performance are desired. In this paper, we propose self-supervised pre-training to stabilize label assignment when training speech separation models. Experiments over several types of self-supervised approaches, several typical speech separation models, and two different datasets showed that substantial improvements are achievable when a suitable self-supervised approach is chosen.
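To make the label-assignment issue concrete, below is a minimal sketch of a PIT loss, assuming a mean-squared-error criterion and a (batch, n_src, time) tensor layout; the function name and criterion are illustrative, not taken from the paper (which may use a different objective such as SI-SNR). The permutation that attains the minimum is the "label assignment"; because this argmin can flip between training steps, the gradient target switches, which is the instability the pre-training aims to reduce.

```python
import itertools
import torch

def pit_loss(estimates: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Permutation invariant training (PIT) loss: evaluate every
    assignment of estimated sources to reference sources and keep
    the cheapest one per utterance.

    estimates, targets: tensors of shape (batch, n_src, time).
    """
    n_src = estimates.shape[1]
    best = None
    for perm in itertools.permutations(range(n_src)):
        # MSE for this particular label assignment, one value per utterance
        loss = torch.mean((estimates[:, list(perm)] - targets) ** 2, dim=(1, 2))
        best = loss if best is None else torch.minimum(best, loss)
    # The permutation achieving the minimum is the label assignment;
    # it may differ from step to step, causing assignment switching.
    return best.mean()
```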