Securing Voice Authentication Applications Against Targeted Data Poisoning

Published in IEEE Internet Computing, 2025. Last Modified: 04 Nov 2025. License: CC BY-SA 4.0
Abstract: Deep neural network-based voice authentication systems are promising but remain vulnerable to targeted data poisoning attacks where adversaries substitute legitimate user utterances to gain unauthorized access. To address this, we propose a novel defense framework that integrates a regularized convolutional neural network with a K-nearest neighbors classifier, enhanced with stratified cross-validation and class weighting to counteract data imbalance inherent in such attacks. Evaluated on real-world datasets under realistic attack scenarios, our framework demonstrates significant robustness. It achieves accurate authentication even when as little as 5% of the training data is poisoned, outperforming existing state-of-the-art methods.
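The abstract's pipeline pairs a CNN feature extractor with a KNN classifier, evaluated under stratified cross-validation with class weighting. A minimal sketch of that evaluation loop is shown below; it is not the authors' code. The CNN embedding stage is stood in for by synthetic per-speaker feature vectors, and in the full framework the class weights would presumably enter the CNN's training loss, since scikit-learn's KNN has no class-weight parameter (both are assumptions).

```python
# Hedged sketch (not the authors' implementation): KNN speaker classification
# over fixed-length embeddings with stratified cross-validation and class
# weights for imbalance. Synthetic vectors stand in for CNN embeddings.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.utils.class_weight import compute_class_weight

rng = np.random.default_rng(0)

# Placeholder "embeddings": in the paper's pipeline these would come from a
# regularized CNN applied to each utterance (assumption).
n_speakers, per_speaker, dim = 5, 40, 32
X = np.vstack([rng.normal(loc=3.0 * i, scale=1.0, size=(per_speaker, dim))
               for i in range(n_speakers)])
y = np.repeat(np.arange(n_speakers), per_speaker)

# Class weights counteract the imbalance a poisoning attacker introduces by
# over-representing a target identity; here they would reweight the CNN's
# loss during training (assumption), so they are only computed for display.
weights = compute_class_weight("balanced", classes=np.unique(y), y=y)

# Stratified K-fold keeps each speaker's proportion constant in every fold.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
accs = []
for train_idx, test_idx in skf.split(X, y):
    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(X[train_idx], y[train_idx])
    accs.append(knn.score(X[test_idx], y[test_idx]))

mean_acc = float(np.mean(accs))
print(f"class weights: {weights}")
print(f"mean stratified-CV accuracy: {mean_acc:.2f}")
```

With the well-separated synthetic clusters used here the cross-validated accuracy is near 1.0; the interesting regime in the paper is when a fraction of the training utterances are substituted by an attacker, which this toy setup does not model.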