Detecting Adversarial Text Attacks via SHapley Additive exPlanations

Anonymous

16 May 2021 (modified: 05 May 2023) · ACL ARR 2021 May Blind Submission
Abstract: State-of-the-art machine learning models are prone to adversarial attacks: maliciously crafted inputs designed to fool the model into making a wrong prediction, often with high confidence. While defense strategies have been extensively explored in the computer vision domain, research in natural language processing still lacks techniques for making models resilient to adversarial text inputs. We propose an adversarial detector that leverages Shapley additive explanations (SHAP) to defend against text attacks. Our approach outperforms the current state-of-the-art detector by around 19% F1-score on IMDb and 14% on SST-2, while also showing competitive performance on AG_News and Yelp Polarity. Furthermore, we show that the detector requires only a small number of training samples and, in some cases, generalizes to different datasets without retraining.
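
The sketch below illustrates the general idea described in the abstract, not the authors' exact pipeline: fit a text classifier, compute fixed-length SHAP vectors for clean and adversarially perturbed inputs, and train a second "detector" model to separate the two. The TF-IDF + logistic-regression victim model and all texts are placeholder assumptions; the paper presumably uses stronger classifiers and real attack outputs.

```python
# Minimal sketch of a SHAP-based adversarial-text detector.
# Assumptions (not from the paper): TF-IDF + logistic regression as the
# victim model, synthetic placeholder texts, and a logistic-regression
# detector trained on SHAP vectors.
import numpy as np
import shap
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Placeholder corpora; in practice these would be IMDb/SST-2 samples and
# their adversarially perturbed counterparts.
clean_texts = ["a genuinely moving and well acted film",
               "dull plodding and lifeless"] * 50
adv_texts   = ["a genuinely touching and well performed movie",
               "tedious plodding and lifeless"] * 50
labels      = [1, 0] * 50  # sentiment labels for the clean texts

# 1) Victim model: a simple TF-IDF + logistic regression classifier.
vec = TfidfVectorizer(max_features=500)
X_clean = vec.fit_transform(clean_texts).toarray()
X_adv = vec.transform(adv_texts).toarray()
victim = LogisticRegression(max_iter=1000).fit(X_clean, labels)

# 2) Explain predictions: one fixed-length SHAP vector per input.
explainer = shap.LinearExplainer(victim, X_clean)
shap_clean = explainer.shap_values(X_clean)
shap_adv = explainer.shap_values(X_adv)

# 3) Detector: classify SHAP vectors as clean (0) vs. adversarial (1).
X_det = np.vstack([shap_clean, shap_adv])
y_det = np.array([0] * len(shap_clean) + [1] * len(shap_adv))
detector = LogisticRegression(max_iter=1000).fit(X_det, y_det)
print("detector accuracy on training SHAP vectors:",
      detector.score(X_det, y_det))
```

The key design choice this illustrates is that the detector never sees raw text: it operates purely on the explanation vectors, on the premise that adversarial perturbations leave a distinctive footprint in the model's feature attributions.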