Certified Robustness in NLP Under Bounded Levenshtein Distance

Published: 28 Jun 2024, Last Modified: 25 Jul 2024 · NextGenAISafety 2024 Poster · CC BY 4.0
Keywords: Robustness verification, Text classifiers, Lipschitz constant
TL;DR: We propose the first method for Lipschitz-constant-based verification under Levenshtein distance constraints in NLP.
Abstract: Natural Language Processing (NLP) models suffer from small perturbations that, if chosen adversarially, can dramatically change the output of the model. Verification methods can provide robustness certificates against such adversarial perturbations by computing a sound lower bound on the robust accuracy. Nevertheless, existing verification methods in NLP incur prohibitive costs and cannot practically handle Levenshtein distance constraints. We propose the first method for computing the Lipschitz constant of convolutional classifiers with respect to the Levenshtein distance. We use this Lipschitz constant estimation method to train 1-Lipschitz classifiers, which makes it possible to compute the certified radius of a classifier in a single forward pass. Our method, LipsLev, obtains $38.80$% and $13.93$% verified accuracy at distance $1$ and $2$, respectively, on the AG-News dataset. We believe our work can open the door to more efficient training and verification of NLP models.
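To illustrate the single-forward-pass certificate the abstract describes, below is a minimal PyTorch sketch of the standard margin-based bound: if the classifier's logits are $L$-Lipschitz with respect to the input metric (here, Levenshtein distance), a prediction is certified within radius (top logit $-$ runner-up logit)$/(2L)$. The function name `certified_radius` and the exact constant are illustrative assumptions; the precise bound used by LipsLev may differ.

```python
import torch

def certified_radius(logits: torch.Tensor, lipschitz_constant: float = 1.0) -> torch.Tensor:
    """Margin-based certified radius from a single forward pass.

    Assumes every logit is `lipschitz_constant`-Lipschitz w.r.t. the input
    metric. The standard bound certifies the prediction for all inputs
    within radius (top logit - runner-up logit) / (2 * L).
    """
    top2 = logits.topk(2, dim=-1).values          # two largest logits, shape (..., 2)
    margin = top2[..., 0] - top2[..., 1]          # prediction margin
    return margin / (2.0 * lipschitz_constant)

# Example: two predictions from a (hypothetical) 1-Lipschitz classifier.
logits = torch.tensor([[4.2, 1.1, 0.3],
                       [2.0, 1.9, 0.5]])
radii = certified_radius(logits, lipschitz_constant=1.0)
# A prediction counts toward verified accuracy at Levenshtein distance k
# whenever it is correct and its radius is at least k.
print(radii)
```

Under this bound, training 1-Lipschitz classifiers is what makes the certificate cheap: the Lipschitz constant is known in advance, so certification reduces to reading off the logit margin.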
Submission Number: 130