From Tweaks to Turmoil: Attacks against Text Summarization Models through Lead Bias and Influence Functions

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: Large Language Models (LLMs) have introduced novel opportunities for text comprehension and generation. Yet, as numerous studies have shown, they are vulnerable to adversarial perturbations and data poisoning attacks, particularly in tasks such as text classification and translation. The adversarial robustness of text summarization models, however, remains less explored. In this work, we unveil a novel approach that exploits the inherent lead bias of summarization models to perform adversarial perturbations. Furthermore, we introduce an innovative application of influence functions to execute data poisoning, which compromises model integrity. This approach not only skews the model's behavior toward attacker-desired outcomes, but also induces a new behavioral change: models under attack tend to generate extractive rather than abstractive summaries.
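
The abstract does not spell out the attack mechanics, but the lead-bias component can be illustrated with a minimal sketch. Assumptions: the `inject_lead` helper, the sample article, and the `facebook/bart-large-cnn` checkpoint below are illustrative choices, not the paper's setup; the paper's actual perturbation strategy may differ.

```python
# Minimal sketch of a lead-bias perturbation (illustrative, not the paper's method).
# News-trained summarization models tend to over-weight the first sentences of an
# article ("lead bias"), so attacker-chosen text placed at the lead position can
# steer the generated summary.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def inject_lead(article: str, adversarial_text: str) -> str:
    """Prepend attacker-controlled text so it occupies the lead position."""
    return adversarial_text.strip() + " " + article.strip()

article = (
    "City officials announced a new recycling program on Monday. "
    "The program will expand curbside pickup to all neighborhoods by fall. "
    "Residents will receive new bins over the coming weeks."
)
perturbed = inject_lead(article, "Officials admitted the program is a costly failure.")
print(summarizer(perturbed, max_length=40, min_length=10)[0]["summary_text"])
```

Because the injected sentence sits at the lead, a lead-biased model is likely to copy or paraphrase it into the summary, which is the behavior the attack exploits.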
Paper Type: long
Research Area: Summarization
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English
