IMPGA: An Effective and Imperceptible Black-Box Attack Against Automatic Speech Recognition Systems

Published: 01 Jan 2022, Last Modified: 13 Nov 2023. APWeb/WAIM (3) 2022.
Abstract: Machine learning systems are ubiquitous in our lives, so it is necessary to study their vulnerabilities in order to improve their reliability and security. In recent years, adversarial example attacks have attracted considerable attention for their remarkable success in fooling machine learning systems, especially in computer vision. For automatic speech recognition (ASR) models, current state-of-the-art attacks mainly focus on white-box methods, which assume that the adversary has full access to the internals of the model. However, this assumption rarely holds in practice. Existing black-box attack methods suffer from low attack success rates, perceptible adversarial examples, and long computation times, so constructing black-box adversarial examples for ASR systems remains a very challenging problem. In this paper, we explore the effectiveness of adversarial attacks against ASR systems. Inspired by psychoacoustic models, we design a method called the Imperceptible Genetic Algorithm (IMPGA) attack, which combines the psychoacoustic principle of auditory masking with genetic algorithms to address this problem. In addition, we propose an adaptive coefficient for auditory masking that balances the attack success rate against the imperceptibility of the generated adversarial examples, and incorporate it into the fitness function of the genetic algorithm. Experimental results indicate that our method achieves a 38% targeted attack success rate while maintaining 92.73% audio file similarity and reducing the required computation time. We also demonstrate the effectiveness of each improvement through ablation experiments.
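The core idea described in the abstract, a genetic-algorithm search whose fitness function trades off attack success against a psychoacoustic masking penalty via an adaptive coefficient, can be illustrated with a minimal sketch. Everything here is an assumption for illustration: `attack_score` stands in for querying the black-box ASR model (the paper's actual objective), `masking_penalty` stands in for a real psychoacoustic masking threshold, and `alpha` plays the role of the adaptive coefficient. This is not the authors' implementation.

```python
import numpy as np

def attack_score(perturbation):
    # Placeholder for the black-box attack objective: in the paper this would be
    # a score from querying the ASR model with the perturbed audio. Here we
    # reward perturbations close to a fixed "adversarial direction" of 0.5.
    target = np.full_like(perturbation, 0.5)
    return -np.mean((perturbation - target) ** 2)

def masking_penalty(perturbation, masking_threshold):
    # Penalize perturbation energy exceeding a (hypothetical) per-sample
    # auditory masking threshold; energy under the threshold is "free".
    excess = np.maximum(np.abs(perturbation) - masking_threshold, 0.0)
    return np.sum(excess ** 2)

def fitness(perturbation, masking_threshold, alpha):
    # alpha: coefficient balancing attack success vs. imperceptibility
    # (the paper adapts this coefficient; here it is fixed for simplicity).
    return attack_score(perturbation) - alpha * masking_penalty(perturbation, masking_threshold)

def genetic_attack(pop_size=20, dim=8, generations=50, alpha=0.1, seed=0):
    rng = np.random.default_rng(seed)
    threshold = 0.6  # hypothetical masking threshold
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(p, threshold, alpha) for p in pop])
        # Selection: keep the top half of the population as parents.
        parents = pop[np.argsort(scores)[-pop_size // 2:]]
        # Crossover: average random parent pairs; mutation: small Gaussian noise.
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        children = (parents[idx[:, 0]] + parents[idx[:, 1]]) / 2.0
        pop = children + rng.normal(0.0, 0.05, size=children.shape)
    scores = np.array([fitness(p, threshold, alpha) for p in pop])
    return pop[np.argmax(scores)]

best = genetic_attack()
```

With this toy objective, the population converges toward the region where the attack score is high and the masking penalty is zero; raising `alpha` tightens the imperceptibility constraint at the cost of attack strength, which is exactly the trade-off the adaptive coefficient is meant to manage.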