Coverage-based Fairness in Multi-document Summarization

ACL ARR 2024 June Submission3801 Authors

16 Jun 2024 (modified: 03 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Fairness in multi-document summarization (MDS) measures whether a system generates summaries that fairly represent information from documents with different social attribute values. Fairness in MDS is crucial because a fair summary offers readers a comprehensive view. Previous work focuses on quantifying summary-level fairness using Proportional Representation, a fairness measure based on Statistical Parity. However, Proportional Representation does not account for redundancy in the input documents and overlooks corpus-level unfairness. In this work, we propose a new summary-level fairness measure, Equal Coverage, which is based on the coverage of documents with different social attribute values and accounts for redundancy within documents. To detect corpus-level unfairness, we propose a new corpus-level measure, Coverage Parity. Our human evaluations show that our measures align with human perceptions of fairness. Using our measures, we evaluate the fairness of ten different LLMs and find that Llama2 is the fairest among them. We also find that almost all LLMs overrepresent certain social attribute values.
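To make the baseline concrete: the abstract describes Proportional Representation as a Statistical-Parity-style measure, i.e. the summary should mirror the attribute-value proportions of the input documents. The sketch below is illustrative only, assuming sentence-level attribute labels; the function name and the max-deviation scoring are choices made here for illustration, not the paper's actual formulation (which, as the abstract notes, this baseline lacks, since it ignores redundancy across documents).

```python
# Illustrative sketch of a Statistical-Parity-style Proportional
# Representation check. Names and scoring (max absolute deviation)
# are assumptions for illustration, not the paper's definitions.
from collections import Counter

def proportional_representation(doc_attrs, summary_attrs):
    """Compare attribute-value proportions in the summary against those in
    the input documents. Returns the maximum absolute deviation across
    attribute values (0.0 means perfectly proportional representation)."""
    doc_dist = Counter(doc_attrs)
    sum_dist = Counter(summary_attrs)
    n_docs, n_sum = len(doc_attrs), len(summary_attrs)
    values = set(doc_dist) | set(sum_dist)
    # Counter returns 0 for missing keys, so absent values are handled.
    return max(abs(doc_dist[v] / n_docs - sum_dist[v] / n_sum) for v in values)

# Example: input units split 50/50 between attribute values "A" and "B",
# but the summary draws 3 of its 4 units from "A" -> deviation of 0.25.
score = proportional_representation(["A", "A", "B", "B"], ["A", "A", "A", "B"])
print(score)  # -> 0.25
```

A coverage-based measure like the proposed Equal Coverage would instead score how much of each group's distinct information the summary covers, so that many near-duplicate documents on one side do not inflate that side's expected share.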
Paper Type: Long
Research Area: Summarization
Research Area Keywords: abstractive summarisation, multi-document summarization, fairness evaluation
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 3801