Real-Time Neural Voice Camouflage

Published: 28 Jan 2022, Last Modified: 22 Oct 2023
ICLR 2022 Oral
Keywords: automatic speech recognition, predictive models, privacy
Abstract: Automatic speech recognition systems have created exciting possibilities for applications; however, they also enable opportunities for systematic eavesdropping. We propose a method to camouflage a person's voice from these systems without inconveniencing the conversation between people in the room. Standard adversarial attacks are not effective in real-time streaming situations because the characteristics of the signal will have changed by the time the attack is executed. We introduce predictive adversarial attacks, which achieve real-time performance by forecasting the attack vector that will be the most effective in the future. Under real-time constraints, our method jams the established speech recognition system DeepSpeech 3.9x more than online projected gradient descent as measured by word error rate, and 6.6x more as measured by character error rate. We furthermore demonstrate that our approach is practically effective in realistic environments with complex scene geometries.
One-sentence Summary: We introduce predictive attacks, which achieve real-time performance in breaking automatic speech recognition models by forecasting the attack vector that will be the most effective in the future.
Supplementary Material: zip
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2112.07076/code)
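
The abstract describes the core idea of a predictive attack: because a perturbation must be played back while the conversation continues, it has to be computed from audio that has already been heard and applied to audio that has not yet arrived. The sketch below is an illustration of that timing structure only, not the paper's implementation; the module name `PerturbationForecaster`, the context/horizon sizes, the epsilon bound, and the `asr_loss_fn` hook are all assumptions.

```python
# Minimal sketch (assumed, not the authors' code) of a predictive adversarial
# perturbation generator for streaming audio, in PyTorch.
import torch
import torch.nn as nn

class PerturbationForecaster(nn.Module):
    """Given the most recent `context` waveform samples, forecast an
    epsilon-bounded perturbation for the *future* `horizon` samples, so the
    attack can be played in real time despite processing/playback delay."""
    def __init__(self, context=32000, horizon=8000, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=400, stride=160),  # rough frame encoder
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(hidden, horizon),
            nn.Tanh(),                                           # bound output to [-1, 1]
        )

    def forward(self, past_audio, eps=0.008):
        # past_audio: (batch, context) waveform in [-1, 1]
        delta = self.net(past_audio.unsqueeze(1))                # (batch, horizon)
        return eps * delta                                       # epsilon-scaled attack


def training_step(forecaster, asr_loss_fn, past_audio, future_audio, transcript):
    """One assumed training step: reward the forecaster when the perturbed
    *future* audio increases the recognizer's loss on the true transcript."""
    delta = forecaster(past_audio)
    attacked = torch.clamp(future_audio + delta, -1.0, 1.0)
    # Maximize the ASR loss, i.e. minimize its negation.
    return -asr_loss_fn(attacked, transcript)
```

The point the sketch tries to capture is the constraint that distinguishes predictive attacks from online projected gradient descent: the perturbation is never optimized against the exact signal it will overlap with, only forecast from the recent past, which is what makes real-time jamming of a streaming recognizer feasible.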