Towards Mitigating Audio Adversarial Perturbations
Zhuolin Yang, Bo Li, Pin-Yu Chen, Dawn Song
Feb 12, 2018 (modified: Feb 13, 2018) · ICLR 2018 Workshop Submission
Abstract: Audio adversarial examples targeting automatic speech recognition systems have recently been demonstrated in different tasks, such as speech-to-text translation and speech classification. Here we explore the robustness of audio adversarial examples generated via two attack strategies by applying different signal processing methods to recover the original audio sequence. In addition, we show that by inspecting the temporal consistency of speech signals, we can identify the non-adaptive audio adversarial examples considered in our experiments with a promising success rate.
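The abstract does not name the specific signal processing methods used for recovery, but the general idea of such mitigations can be sketched as input transformations that preserve the clean signal while suppressing small adversarial perturbations. Below is a minimal, illustrative sketch assuming two common transforms, amplitude quantization and moving-average (low-pass) smoothing; the function names and parameter choices are ours, not the paper's.

```python
import numpy as np

def quantize(audio, bits=8):
    # Illustrative quantization defense (assumed transform, not the paper's
    # exact method): rounding amplitudes in [-1, 1] to a coarse grid can
    # wash out small adversarial perturbations.
    levels = 2 ** bits
    return np.round((audio + 1.0) / 2.0 * (levels - 1)) / (levels - 1) * 2.0 - 1.0

def moving_average(audio, k=5):
    # Illustrative smoothing defense: a short moving-average filter acts
    # as a crude low-pass filter over the waveform.
    kernel = np.ones(k) / k
    return np.convolve(audio, kernel, mode="same")

# A clean 440 Hz tone plus a small high-frequency "perturbation".
t = np.linspace(0, 1, 16000, endpoint=False)
clean = 0.5 * np.sin(2 * np.pi * 440 * t)
perturbed = clean + 0.05 * np.sin(2 * np.pi * 7900 * t)

recovered = moving_average(quantize(perturbed), k=5)

# The transforms should bring the signal closer to the clean waveform
# than the perturbed input was.
err_before = np.mean((perturbed - clean) ** 2)
err_after = np.mean((recovered - clean) ** 2)
```

In practice the recovered waveform (rather than the raw input) would be fed to the speech recognition system, and a defense is judged by whether the model's transcription is restored; the mean-squared-error comparison here is only a stand-in for that evaluation.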
TL;DR: Evaluating the robustness of audio adversarial examples by applying different mitigation and detection methods.
Keywords: audio adversarial example, mitigation, detection, machine learning