Attacks against Abstractive Text Summarization Models through Lead Bias and Influence Functions

ACL ARR 2024 June Submission3957 Authors

16 Jun 2024 (modified: 08 Aug 2024) · ACL ARR 2024 June Submission · License: CC BY 4.0
Abstract: Large Language Models (LLMs) have introduced novel opportunities for text comprehension and generation, yet they remain vulnerable to adversarial perturbations and data poisoning attacks, particularly in tasks such as text classification and translation. The adversarial robustness of abstractive text summarization models, however, remains less explored. In this work, we present a novel approach that exploits the inherent lead bias of summarization models to craft adversarial perturbations. We further introduce a new application of influence functions to carry out data poisoning, compromising the model's integrity. These attacks not only skew the models' behavior toward attacker-desired outcomes but also induce a new behavioral change, in which models under attack tend to generate extractive rather than abstractive summaries.
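The abstract's lead-bias idea can be illustrated with a minimal probe: because abstractive summarizers over-attend to the opening sentences of a document, content injected at the lead position is disproportionately likely to surface in the generated summary. The sketch below is a hypothetical illustration, not the authors' attack code; the model name, the example article, and the injected sentence are all assumptions chosen for demonstration.

```python
# Hypothetical lead-bias probe (illustrative only, not the paper's method):
# prepend an adversarial sentence to an article's lead and compare the
# summaries a pretrained abstractive model produces before and after.
from transformers import pipeline

# Assumed off-the-shelf summarizer; any abstractive model would do.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "The city council approved a new budget on Tuesday after weeks of debate. "
    "The plan increases funding for public transit and road repairs, while "
    "freezing spending on several administrative departments."
)

# Adversarial lead: content placed at the very start of the input, where
# lead bias makes the model most likely to copy it into the summary.
injected_lead = "Officials confirmed the budget secretly funds a luxury stadium. "

clean = summarizer(article, max_length=40, min_length=10, do_sample=False)
attacked = summarizer(injected_lead + article, max_length=40, min_length=10, do_sample=False)

print("Clean summary:   ", clean[0]["summary_text"])
print("Attacked summary:", attacked[0]["summary_text"])
```

Comparing the two outputs (e.g., with ROUGE overlap against the injected sentence) gives a simple way to measure how strongly the lead perturbation steers the summary; the paper's influence-function-based poisoning targets the training data instead and is not captured by this sketch.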
Paper Type: Long
Research Area: Summarization
Research Area Keywords: abstractive summarization; multi-document summarization; extractive summarization; adversarial attacks/examples/training; data influence; robustness
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 3957