Abstract: Despite recent success on various tasks, deep learning techniques still perform poorly on adversarial examples with small perturbations. While optimization methods for adversarial attacks are well explored in the field of computer vision, it is impractical to apply them directly in natural language processing due to the discrete nature of text. To address this problem, we propose a unified framework that extends existing optimization-based methods from the vision domain to craft textual adversarial samples. In this framework, continuously optimized perturbations are added to the embedding layer and amplified during forward propagation. The final perturbed latent representations are then decoded with a masked language model head to obtain potential adversarial samples. In this paper, we instantiate our framework with an attack algorithm named Textual Projected Gradient Descent (T-PGD). We find our algorithm effective even when using proxy gradient information. Therefore, we perform more challenging transfer black-box attacks and conduct comprehensive experiments to evaluate our attack algorithm with BERT, RoBERTa, and ALBERT on three benchmark datasets. Experimental results demonstrate that our method achieves overall better performance and produces more fluent and grammatical adversarial samples compared to strong baseline methods. All the code and data will be made public.
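To make the mechanism described in the abstract concrete, below is a minimal, hedged sketch of the core idea: PGD-style continuous perturbations applied to the embedding layer, followed by decoding the perturbed representations with a masked-language-model head. The names `victim_model`, `mlm_head`, and all hyper-parameter values are illustrative assumptions for a Hugging Face-style classifier, not the authors' actual T-PGD implementation.

```python
# Sketch of PGD on input embeddings with MLM-head decoding (assumptions noted above).
import torch


def t_pgd_sketch(victim_model, mlm_head, input_embeds, attention_mask, label,
                 steps=20, alpha=1e-2, epsilon=0.1):
    """Perturb continuous input embeddings to maximize the victim's loss,
    then decode the perturbed hidden states into discrete candidate tokens."""
    delta = torch.zeros_like(input_embeds, requires_grad=True)
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(steps):
        logits = victim_model(inputs_embeds=input_embeds + delta,
                              attention_mask=attention_mask).logits
        loss = loss_fn(logits, label)            # ascend the victim's loss
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # signed gradient step
            delta.clamp_(-epsilon, epsilon)      # project back to the L_inf ball
            delta.grad.zero_()

    # Decode the perturbed latent representations with an MLM head to obtain
    # discrete token candidates for the adversarial sample.
    with torch.no_grad():
        outputs = victim_model(inputs_embeds=input_embeds + delta,
                               attention_mask=attention_mask,
                               output_hidden_states=True)
        hidden = outputs.hidden_states[-1]
        adv_token_ids = mlm_head(hidden).argmax(dim=-1)
    return adv_token_ids
```

In a transfer black-box setting, the gradients would come from a local proxy model rather than the victim, consistent with the abstract's note that the attack remains effective with proxy gradient information.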
Paper Type: long
Research Area: Interpretability and Analysis of Models for NLP