Code-Mixing on Sesame Street: Dawn of the Adversarial Polyglots

29 Jan 2021 (modified: 05 Jun 2021) | OpenReview Anonymous Preprint Blind Submission | Readers: Everyone
Keywords: adversarial, attack, multilingual, code-mixing, robustness, natural language processing
TL;DR: We present, to our knowledge, the first two adversarial attacks for multilingual models and an efficient adversarial training scheme to improve model robustness to code-mixed adversaries.
Abstract: Multilingual models have demonstrated impressive cross-lingual transfer performance. However, test sets like XNLI are monolingual at the example level. In multilingual communities, it is common for polyglots to code-mix when conversing with each other. Inspired by this phenomenon, we present two strong black-box adversarial attacks (one word-level, one phrase-level) for multilingual models that push their ability to handle code-mixed sentences to the limit. The former uses bilingual dictionaries to propose perturbations, together with translations of the clean example for sense disambiguation. The latter directly aligns the clean example with its translations before extracting phrases as perturbations. Our phrase-level attack has a success rate of 89.75% against XLM-R Large, bringing its average accuracy on XNLI down from 79.85% to 8.18%. Finally, we propose an efficient adversarial training scheme that trains in the same number of steps as the original model and show that it improves model accuracy.
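To illustrate the word-level idea described in the abstract, here is a minimal sketch of a black-box, dictionary-based code-mixing perturbation search. The bilingual dictionary, the toy sentence, and the `score_label_prob` callable are illustrative placeholders, not the authors' actual resources or implementation.

```python
# Minimal sketch of a word-level code-mixing perturbation search against a
# black-box victim model exposed only through a scoring callable.
from typing import Callable, Dict, List


def word_level_codemix_attack(
    sentence: str,
    bilingual_dict: Dict[str, List[str]],
    score_label_prob: Callable[[str], float],
) -> str:
    """Greedily replace words with dictionary translations that most reduce
    the victim model's probability for the gold label (lower is better for
    the attacker). Returns the adversarial code-mixed sentence."""
    tokens = sentence.split()
    best_prob = score_label_prob(sentence)

    for i, token in enumerate(tokens):
        for candidate in bilingual_dict.get(token.lower(), []):
            perturbed = tokens.copy()
            perturbed[i] = candidate
            prob = score_label_prob(" ".join(perturbed))
            if prob < best_prob:  # keep the most damaging substitution so far
                best_prob = prob
                tokens = perturbed

    return " ".join(tokens)


if __name__ == "__main__":
    # Toy English->Spanish entries; a real attack would draw on full bilingual
    # dictionaries across many language pairs, as described in the abstract.
    toy_dict = {"cat": ["gato"], "sleeping": ["durmiendo"], "sofa": ["sofá"]}

    # Stand-in for querying the victim model's gold-label probability.
    def fake_score(text: str) -> float:
        return 0.9 - 0.2 * sum(w in text for w in ("gato", "durmiendo"))

    print(word_level_codemix_attack("the cat is sleeping on the sofa", toy_dict, fake_score))
```

The greedy, query-only loop mirrors the black-box setting: the attacker never needs gradients, only the model's confidence on candidate code-mixed sentences.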