Improving Fairness of Large Language Models in Multi-document Summarization

ACL ARR 2024 December Submission1832 Authors

16 Dec 2024 (modified: 05 Feb 2025) · License: CC BY 4.0
Abstract: Fairness in multi-document summarization (MDS) is crucial for providing comprehensive views across documents with diverse social attribute values. Prior work measures fairness in MDS at two levels: summary level and corpus level. Summary-level fairness concerns the representation of social attributes within an individual summary, while corpus-level fairness concerns their representation across a corpus of summaries. Recent approaches based on prompting or policy gradients address primarily summary-level fairness. We propose FairPO, a preference tuning method that improves both summary-level and corpus-level fairness in MDS. To improve summary-level fairness, we generate preference pairs by perturbing document sets with respect to social attributes. To improve corpus-level fairness, we introduce fairness-aware preference tuning, which dynamically adjusts the weights of preference pairs according to the overrepresentation or underrepresentation of social attributes. Our experiments show that FairPO outperforms strong baselines while maintaining the essential qualities of the summaries.
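The abstract gives no implementation details, but the fairness-aware preference tuning it describes can be illustrated with a minimal sketch. The example below assumes a DPO-style objective in which each preference pair carries a weight reflecting corpus-level attribute imbalance; the function names, the weighting rule, and the hyperparameter beta are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def pair_weight(attr_coverage: float, target: float = 0.5) -> float:
    """Hypothetical weighting rule: upweight preference pairs whose
    preferred summary covers a social attribute value that is currently
    underrepresented in the generated corpus (coverage below target),
    and downweight pairs reinforcing an overrepresented value."""
    return target / max(attr_coverage, 1e-6)

def fairness_weighted_preference_loss(
    policy_chosen_logps: torch.Tensor,    # log p_theta(y_w | x) for preferred summaries
    policy_rejected_logps: torch.Tensor,  # log p_theta(y_l | x) for dispreferred summaries
    ref_chosen_logps: torch.Tensor,       # same quantities under a frozen reference model
    ref_rejected_logps: torch.Tensor,
    pair_weights: torch.Tensor,           # per-pair weights, e.g. from pair_weight()
    beta: float = 0.1,
) -> torch.Tensor:
    """DPO-style preference loss with per-pair fairness weights (sketch)."""
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Standard DPO logit: how much more the policy prefers y_w over y_l
    # than the reference model does, scaled by beta.
    logits = beta * (chosen_ratio - rejected_ratio)
    per_pair_loss = -F.logsigmoid(logits)
    # Dynamic weights emphasize pairs that correct corpus-level imbalance.
    return (pair_weights * per_pair_loss).mean()
```

Under this reading, summary-level fairness enters through the construction of the pairs themselves (summaries generated from perturbed document sets), while corpus-level fairness enters through the weights, which would be recomputed as the attribute distribution of generated summaries shifts during training.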
Paper Type: Short
Research Area: Summarization
Research Area Keywords: abstractive summarization, multi-document summarization, model bias/unfairness mitigation
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 1832