Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models

Published: 11 Oct 2021, Last Modified: 08 Sept 2024
NeurIPS 2021 Datasets and Benchmarks Track (Round 2)
Readers: Everyone
Keywords: multi-task benchmark dataset, adversarial robustness, language models, natural language understanding
TL;DR: We propose Adversarial GLUE (AdvGLUE), an adversarial robustness benchmark for quantitatively and thoroughly understanding model vulnerabilities to different types of adversarial transformations.
Abstract: Large-scale pre-trained language models have achieved tremendous success across a wide range of natural language understanding (NLU) tasks, even surpassing human performance. However, recent studies reveal that the robustness of these models can be challenged by carefully crafted textual adversarial examples. While several individual datasets have been proposed to evaluate model robustness, a principled and comprehensive benchmark is still missing. In this paper, we present Adversarial GLUE (AdvGLUE), a new multi-task benchmark to quantitatively and thoroughly explore and evaluate the vulnerabilities of modern large-scale language models under various types of adversarial attacks. In particular, we systematically apply 14 textual adversarial attack methods to GLUE tasks to construct AdvGLUE, which is further validated by humans for reliable annotations. Our findings are summarized as follows. (i) Most existing adversarial attack algorithms are prone to generating invalid or ambiguous adversarial examples, with around 90% of them either changing the original semantic meaning or misleading human annotators as well as the target models. Therefore, we perform a careful filtering process to curate a high-quality benchmark. (ii) All the language models and robust training methods we tested perform poorly on AdvGLUE, with scores lagging far behind their benign accuracy. We hope our work will motivate the development of new adversarial attacks that are more stealthy and semantic-preserving, as well as new robust language models against sophisticated adversarial attacks. AdvGLUE is available at https://adversarialglue.github.io.
Supplementary Material: zip
URL: https://adversarialglue.github.io
Contribution Process Agreement: Yes
Dataset Url: https://adversarialglue.github.io
License: Our dataset will be distributed under the CC BY-SA 4.0 license.
Author Statement: Yes
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/adversarial-glue-a-multi-task-benchmark-for/code)
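
For reference, here is a minimal sketch of evaluating a model on one AdvGLUE task. It assumes the community mirror of the dataset on the Hugging Face Hub (`adv_glue`, which exposes only a validation split) and uses a publicly available fine-tuned SST-2 checkpoint as a stand-in; this is not the authors' official evaluation pipeline, and you would swap in your own model.

```python
# Minimal sketch: score a sentiment classifier on AdvGLUE's adversarial SST-2.
# Assumes the Hugging Face Hub mirror "adv_glue" (config "adv_sst2") and an
# off-the-shelf SST-2 checkpoint; replace the model name with your own.
from datasets import load_dataset
from transformers import pipeline

# AdvGLUE only releases a validation split publicly.
advglue_sst2 = load_dataset("adv_glue", "adv_sst2", split="validation")

clf = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Map the pipeline's string labels onto the dataset's 0/1 label ids.
label_map = {"NEGATIVE": 0, "POSITIVE": 1}
preds = [label_map[p["label"]] for p in clf(advglue_sst2["sentence"])]
accuracy = sum(p == y for p, y in zip(preds, advglue_sst2["label"])) / len(preds)
print(f"AdvGLUE SST-2 accuracy: {accuracy:.3f}")
```

Comparing this number against the same model's accuracy on the benign GLUE SST-2 dev set reproduces the paper's central observation: scores on the adversarial split lag far behind benign accuracy.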