TransFool: An Adversarial Attack against Neural Machine Translation Models

Published: 25 Jun 2023, Last Modified: 25 Jun 2023. Accepted by TMLR.
Abstract: Deep neural networks have been shown to be vulnerable to small perturbations of their inputs, known as adversarial attacks. In this paper, we investigate the vulnerability of Neural Machine Translation (NMT) models to adversarial attacks and propose a new attack algorithm called TransFool. To fool NMT models, TransFool builds on a multi-term optimization problem and a gradient projection step. By integrating the embedding representation of a language model, we generate fluent adversarial examples in the source language that maintain a high level of semantic similarity with the clean samples. Experimental results demonstrate that, for different translation tasks and NMT architectures, our white-box attack can severely degrade translation quality while the semantic similarity between the original and the adversarial sentences remains high. Moreover, we show that TransFool is transferable to unknown target models. Finally, based on automatic and human evaluations, TransFool yields improvements in success rate, semantic similarity, and fluency compared to existing attacks in both white-box and black-box settings. Thus, TransFool allows us to better characterize the vulnerability of NMT models and highlights the necessity of designing strong defense mechanisms and more robust NMT systems for real-life applications.
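The abstract describes an attack that optimizes a multi-term objective over token embeddings and projects the result back onto the discrete vocabulary. The toy sketch below illustrates that general pattern only; it is not the authors' TransFool implementation. The embedding table, loss terms (`translation_loss`, `similarity_loss`), step size, and weight `alpha` are hypothetical placeholders, and the language-model fluency term used by TransFool is omitted for brevity.

```python
# Illustrative sketch of a gradient-projection attack on token embeddings.
# Not the authors' code: all names and loss terms are placeholders.
import torch

torch.manual_seed(0)

vocab_size, embed_dim, sent_len = 100, 16, 8
embedding_table = torch.randn(vocab_size, embed_dim)   # toy NMT embedding matrix

src_ids = torch.randint(0, vocab_size, (sent_len,))
ref_embeds = embedding_table[src_ids].detach()          # embeddings of the clean sentence
adv_embeds = ref_embeds.clone().requires_grad_(True)    # continuous adversarial embeddings

def translation_loss(embeds):
    # Placeholder for the NMT loss term the attack tries to optimize.
    return -embeds.sum()

def similarity_loss(embeds, reference):
    # Placeholder term keeping the adversarial sentence close to the original.
    return torch.norm(embeds - reference)

alpha, lr = 0.5, 0.1  # hypothetical trade-off weight and step size

for step in range(10):
    loss = translation_loss(adv_embeds) + alpha * similarity_loss(adv_embeds, ref_embeds)
    loss.backward()
    with torch.no_grad():
        adv_embeds -= lr * adv_embeds.grad               # gradient step on the combined loss
        # Projection: snap each continuous embedding to its nearest vocabulary token.
        dists = torch.cdist(adv_embeds, embedding_table)  # (sent_len, vocab_size)
        adv_ids = dists.argmin(dim=1)
        adv_embeds.copy_(embedding_table[adv_ids])
    adv_embeds.grad.zero_()

print("original ids:   ", src_ids.tolist())
print("adversarial ids:", adv_ids.tolist())
```

The projection step is what keeps the perturbed sentence expressible as actual tokens; in the paper's setting the loss additionally incorporates a language-model embedding representation to preserve fluency and semantics, which this sketch does not model.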
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:
- Deanonymized the author list
- Added an acknowledgments section
- Added a URL to the source code of the experiments
- Revised the Human Evaluation section
Assigned Action Editor: ~Alessandro_Sordoni1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 986