Submission Type: Regular Long Paper
Submission Track: Machine Learning for NLP
Submission Track 2: Interpretability, Interactivity, and Analysis of Models for NLP
Keywords: Natural Language Processing, Language Models, BERT, RoBERTa, Prompting, Adversarial Robustness
TL;DR: We make the surprising observation that tuning NLP models via prompting provides robustness against adversarial attacks.
Abstract: In recent years, NLP practitioners have converged on the following practice: (i) import an off-the-shelf pretrained (masked) language model; (ii) append a multilayer perceptron atop the CLS token's hidden representation (with randomly initialized weights); and (iii) fine-tune the entire model on a downstream task (MLP-FT). This procedure has produced massive gains on standard NLP benchmarks, but the resulting models remain brittle, even to mild adversarial perturbations. In this work, we demonstrate surprising gains in adversarial robustness enjoyed by Model-tuning Via Prompts (MVP), an alternative method of adapting to downstream tasks. Rather than appending an MLP head to make output predictions, MVP appends a prompt template to the input and makes predictions via text infilling/completion. Across 5 NLP datasets, 4 adversarial attacks, and 3 different models, MVP improves performance against adversarial substitutions by an average of 8% over standard methods and even outperforms adversarial-training-based state-of-the-art defenses by 3.5%. By combining MVP with adversarial training, we achieve further improvements in adversarial robustness while maintaining performance on unperturbed examples. Finally, we conduct ablations to investigate the mechanism underlying these gains. Notably, we find that the vulnerability of MLP-FT can be attributed largely to the misalignment between the pre-training and fine-tuning tasks, and to the randomly initialized MLP parameters.
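The contrast between the two adaptation schemes can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration of MVP-style prediction: a prompt template with a [MASK] slot is appended to the input, and the class is chosen by comparing the language model's logits for each class's verbalizer token at the mask position. The template, verbalizer mapping, and the toy logit function are all illustrative stand-ins (a real implementation would query a pretrained masked LM such as RoBERTa), not the paper's actual code.

```python
# Hypothetical sketch of MVP-style prediction via text infilling.
# "template" and "verbalizer" follow common prompting terminology;
# toy_mask_logits stands in for a pretrained masked LM's logits at
# the [MASK] position, so the sketch runs without model downloads.

def mvp_predict(sentence, template, verbalizer, mask_logits_fn):
    """Fill the [MASK] slot of a prompt and pick the class whose
    verbalizer token receives the highest logit from the LM."""
    prompt = template.format(input=sentence)   # e.g. "... It was [MASK]."
    logits = mask_logits_fn(prompt)            # token -> logit at [MASK]
    return max(verbalizer, key=lambda label: logits[verbalizer[label]])

def toy_mask_logits(prompt):
    # Toy stand-in: score "great" higher when positive cues appear.
    positive_cues = ("great", "loved", "wonderful")
    score = float(sum(cue in prompt for cue in positive_cues))
    return {"great": score, "terrible": 1.0 - score}

template = "{input} It was [MASK]."
verbalizer = {"positive": "great", "negative": "terrible"}

print(mvp_predict("I loved this movie.", template, verbalizer, toy_mask_logits))
# -> positive
```

Note that, unlike MLP-FT, no randomly initialized parameters are introduced: prediction reuses the LM's own output (token) distribution, which is the alignment with pre-training that the ablations identify as a source of robustness.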
Submission Number: 5093