Pushing the Limits of Self-Supervised Speaker Verification using Regularized Distillation Framework

Published: 01 Jan 2023 · Last Modified: 19 Apr 2025 · ICASSP 2023 · CC BY-SA 4.0
Abstract: Training robust speaker verification systems without speaker labels has long been a challenging task. Previous studies have observed a large performance gap between self-supervised and fully supervised methods. In this paper, we apply a non-contrastive self-supervised learning framework called DIstillation with NO labels (DINO) and propose two regularization terms applied to the embeddings in DINO. One regularization term guarantees the diversity of the embeddings, while the other decorrelates the variables of each embedding. The effectiveness of various data augmentation techniques is explored in both the time and frequency domains. A range of experiments conducted on the VoxCeleb datasets demonstrates the superiority of the regularized DINO framework for speaker verification. Our method achieves state-of-the-art speaker verification performance under a single-stage self-supervised setting on VoxCeleb.
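The abstract describes the two regularizers only by their goals (embedding diversity and per-variable decorrelation), not their exact form. As a hedged illustration, the sketch below implements VICReg-style variance and covariance penalties in PyTorch, which match those two goals; the function name, batch/embedding shapes, and the hinge target of 1.0 are illustrative assumptions, not the paper's formulation.

```python
import torch

def diversity_and_decorrelation_losses(z: torch.Tensor, eps: float = 1e-4):
    """Hypothetical sketch of two embedding regularizers with the goals
    named in the abstract, modeled on VICReg-style variance/covariance terms.

    z: batch of embeddings, shape (N, D).
    Returns (diversity_loss, decorrelation_loss).
    """
    n, d = z.shape

    # Diversity term: hinge on the per-dimension standard deviation so the
    # embeddings in a batch do not collapse to a single point.
    std = torch.sqrt(z.var(dim=0) + eps)
    diversity_loss = torch.relu(1.0 - std).mean()

    # Decorrelation term: penalize off-diagonal entries of the covariance
    # matrix so different embedding variables carry non-redundant information.
    z_centered = z - z.mean(dim=0)
    cov = (z_centered.T @ z_centered) / (n - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    decorrelation_loss = (off_diag ** 2).sum() / d

    return diversity_loss, decorrelation_loss


if __name__ == "__main__":
    z = torch.randn(32, 256)  # dummy batch of 256-dim speaker embeddings
    div, dec = diversity_and_decorrelation_losses(z)
    print(f"diversity: {div.item():.4f}, decorrelation: {dec.item():.4f}")
```

In a DINO-style setup these terms would typically be added, with tuned weights, to the distillation loss on the student embeddings; the weighting and attachment point here are assumptions, not details given in the abstract.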