Abstract: Evasion attacks in adversarial machine learning (AML) are well established for multi-class classification problems. For multi-label audio classification, however, evasion AML presents several unique challenges that the attacker must address. First, the attacker must adapt the attack from the multi-class scenario assumed by the vast majority of evasion attacks to the multi-label setting. Second, the attacker should craft imperceptible adversarial examples to reduce suspicion from human observers by accounting for properties inherent to the audio data set and model. Lastly, given the proliferation of AML techniques, the attacker should assume that the target model implements a deep neural network (DNN) defense and should devise a strategy to overcome it. In this paper, we show how an attacker would design and adapt an attack against multi-label audio classification using stochastic DNNs, highlighting considerations for linking AML techniques to acoustic-based metrics.