Abstract: Audio adversarial examples targeting automatic speech recognition systems have recently been demonstrated on different tasks, such as speech-to-text transcription and speech classification. Here we explore the robustness of audio adversarial examples generated via two attack strategies by applying different signal processing methods to recover the original audio sequence. In addition, we show that by inspecting the temporal consistency of speech signals, we can identify the non-adaptive audio adversarial examples considered in our experiments with a promising success rate.
TL;DR: Evaluating the robustness of audio adversarial examples by applying different mitigation and detection methods.
Keywords: audio adversarial example, mitigation, detection, machine learning