Do Not Mimic My Voice: Speaker Identity Unlearning for Zero-Shot Text-to-Speech

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: Implementation of machine unlearning in zero-shot text-to-speech to enhance safety in model usage and voice privacy.
Abstract: The rapid advancement of Zero-Shot Text-to-Speech (ZS-TTS) technology has enabled high-fidelity voice synthesis from minimal audio cues, raising significant privacy and ethical concerns. Despite these threats to voice privacy, research on selectively removing the knowledge needed to replicate specific individuals' voices from pre-trained model parameters has not been explored. In this paper, we address the new challenge of speaker identity unlearning for ZS-TTS systems. To meet this goal, we propose the first machine unlearning frameworks for ZS-TTS, most notably Teacher-Guided Unlearning (TGU), designed to ensure the model forgets designated speaker identities while retaining its ability to generate accurate speech for other speakers. Our proposed methods incorporate randomness to prevent consistent replication of forget speakers' voices, ensuring that unlearned identities remain untraceable. Additionally, we propose a new evaluation metric, speaker-Zero Retrain Forgetting (spk-ZRF), which assesses the model's ability to disregard prompts associated with forgotten speakers, effectively neutralizing its knowledge of these voices. Experiments conducted on a state-of-the-art model demonstrate that TGU prevents the model from replicating forget speakers' voices while maintaining high quality for other speakers. The demo is available at https://speechunlearn.github.io/ .
Lay Summary: Zero-Shot Text-to-Speech (ZS-TTS) models can now copy anyone’s voice from a 3-second clip, posing serious privacy and fraud risks. Once trained, today’s ZS-TTS systems cannot selectively refuse to generate an unwanted voice. We present the first “Speaker-Identity Unlearning” framework that lets developers erase unwanted voices while leaving all other abilities intact. Our Teacher-Guided Unlearning modifies the model so that any request to clone a protected speaker is rendered in a random, untraceable voice. A new score, speaker-Zero Retrain Forgetting, confirms that an attempt to clone a protected (“forgotten”) speaker yields a random voice, while output for the other (“remain”) speakers stays identical. This method drives speaker similarity for forgotten voices down to the level of unrelated speakers while keeping word accuracy and naturalness unchanged. Performance remains consistent when scaling to forget sets of various sizes and for unseen speakers the model has never been trained on. This means users can demand “do not mimic my voice” for voice privacy.
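The core idea described above can be sketched in a few lines. This is a hedged, toy illustration, not the paper's implementation: `teacher`, `tgu_target`, and the list-based "embeddings" are all assumptions made for clarity. It shows the training target TGU assigns the student: for a speaker in the forget set, the target is the frozen teacher's output under a randomly drawn *other* speaker prompt (so the forgotten identity is never replicated consistently); for every remaining speaker, the target is the teacher's ordinary output, preserving quality.

```python
import random

random.seed(0)

def teacher(text_emb, speaker_emb):
    # Stand-in for a frozen pre-trained ZS-TTS teacher: a toy linear
    # blend of text and speaker features (illustrative only).
    return [0.7 * t + 0.3 * s for t, s in zip(text_emb, speaker_emb)]

def tgu_target(text_emb, speaker_id, speaker_embs, forget_ids):
    """Hedged sketch of a Teacher-Guided Unlearning training target.

    speaker_embs: dict mapping speaker id -> speaker embedding (assumed
    structure, not the paper's API). forget_ids: speakers to unlearn.
    """
    if speaker_id in forget_ids:
        # Forget set: guide the student toward the teacher's output for
        # a randomly sampled different speaker, injecting the randomness
        # that keeps the unlearned identity untraceable.
        candidates = [s for s in speaker_embs if s != speaker_id]
        random_id = random.choice(candidates)
        return teacher(text_emb, speaker_embs[random_id])
    # Remain set: match the teacher exactly, retaining quality.
    return teacher(text_emb, speaker_embs[speaker_id])
```

In training, the student would be optimized to minimize the distance between its output and this target; the sketch only makes the target-selection logic concrete.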
Link To Code: https://github.com/mokcho/spk_id_unlearn_icml2025
Primary Area: Social Aspects->Safety
Keywords: machine unlearning, zero-shot tts, voice privacy
Submission Number: 9258