Principled Content Selection to Generate Diverse and Personalized Multi-Document Summaries

ACL ARR 2025 February Submission1508 Authors

13 Feb 2025 (modified: 09 May 2025) · CC BY 4.0
Abstract: While large language models (LLMs) are increasingly capable of handling longer contexts, recent work has demonstrated that they exhibit the \emph{"lost in the middle"} phenomenon \cite{liu2024lost}, attending unevenly to different parts of the provided context. This hinders their ability to cover diverse source material in multi-document summarization, as noted in the \diversumm benchmark \cite{huang2024embrace}. In this work, we contend that principled content selection is a simple way to increase source coverage on this task. Rather than prompting an LLM to perform the summarization in a single step, we explicitly divide the task into three steps---(1) reducing document collections to atomic key points, (2) using determinantal point processes (DPPs) to select key points that prioritize diverse content, and (3) rewriting the selected points into the final summary. By combining prompting steps (for extraction and rewriting) with principled techniques (for content selection), we consistently improve source coverage on the \diversumm benchmark across various LLMs. Finally, we show that by incorporating relevance to a provided user intent into the DPP kernel, we can generate \emph{personalized} summaries that cover \emph{relevant} source information while retaining coverage.
Paper Type: Long
Research Area: Summarization
Research Area Keywords: query-focused summarization; multi-document summarization; long-form summarization;
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 1508