Robustifying Language Models with Test-Time Adaptation

Published: 04 Mar 2023, Last Modified: 21 Apr 2024
ICLR 2023 Workshop on Trustworthy ML Poster
Keywords: Natural Language Processing, Deep Learning, Machine Learning, Robustness, Adversarial Attacks, Adversarial Defenses
TL;DR: A test-time adversarial defense that uses masked-language modelling to fix attacked sentences.
Abstract: Large-scale language models achieve state-of-the-art performance on a number of language tasks. However, they fail on adversarial language examples: sentences optimized to fool the language models while preserving similar semantic meaning for humans. While prior work focuses on making language models robust at training time, retraining for robustness is often unrealistic for large-scale foundation models. Instead, we propose to make language models robust at test time. By dynamically adapting the input sentence with predictions for masked words, we show that we can reverse many language adversarial attacks. Since our approach does not require any training, it works for novel tasks at test time and can adapt to novel adversarial corruptions. Visualizations and empirical results on two popular sentence classification datasets demonstrate that our method can repair adversarial language attacks over 65% of the time.
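The abstract describes repairing an attacked input by masking words and substituting them with a masked-language model's predictions. Below is a minimal sketch of that general idea, not the authors' released implementation: it assumes a Hugging Face `fill-mask` pipeline with `bert-base-uncased`, masks one word at a time, and replaces a word only when the model strongly prefers a different token. The word-selection heuristic and the `confidence_threshold` value are illustrative choices.

```python
# Sketch of masked-word repair for a possibly attacked sentence (illustrative,
# not the paper's exact procedure).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def repair_sentence(sentence: str, confidence_threshold: float = 0.5) -> str:
    """Mask each word in turn and substitute the MLM's top prediction
    when the model is confident it should be a different word."""
    words = sentence.split()
    repaired = list(words)
    for i in range(len(words)):
        masked = " ".join(words[:i] + [fill_mask.tokenizer.mask_token] + words[i + 1:])
        top = fill_mask(masked, top_k=1)[0]  # best MLM prediction for position i
        predicted = top["token_str"].strip()
        # Replace only when the MLM strongly prefers a different word.
        if top["score"] > confidence_threshold and predicted != words[i].lower():
            repaired[i] = predicted
    return " ".join(repaired)

# Example: a misspelled/perturbed word is likely to be rewritten by the MLM.
print(repair_sentence("the movie was absolutly terrible"))
```

Because the repair step only needs a pretrained masked-language model, it can be applied at test time to any downstream classifier without retraining, which is the property the abstract emphasizes.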
Community Implementations: 1 code implementation (https://www.catalyzex.com/paper/arxiv:2310.19177/code)
