Abstract: Large language models (LLMs) have shown impressive performance in generating concise and fluent summaries. However, the generated summaries can still contain information that is inconsistent with the input article, a problem known as faithfulness hallucination. This paper proposes a simple and effective approach to improve faithfulness in abstractive summarisation by leveraging attribution at inference time. Our method incorporates an attribution mechanism to explicitly identify the input sentences that most influence the generated summary and steers the model to refine the summary based on these attributed sentences. We evaluate our approach on multiple summarisation benchmarks, including CNN/DailyMail, XSum, and CCSum, measuring both faithfulness and similarity to the reference. Our experimental results show that attribution-guided summarisation consistently reduces faithfulness hallucination compared with several decoding-based approaches, while maintaining comparable semantic similarity to the reference.
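For illustration only, here is a minimal sketch (not the paper's implementation) of how an attribution-guided refinement loop of the kind described in the abstract could be wired up. It assumes a Hugging Face seq2seq summariser, a simple leave-one-out attribution score, and a hypothetical `top_k` cutoff; the actual attribution mechanism and refinement strategy used in the paper may differ.

```python
# Hypothetical sketch of inference-time attribution-guided summarisation.
# Model choice, attribution score, and top_k are illustrative assumptions.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "facebook/bart-large-cnn"  # placeholder summarisation model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)


def attribution_guided_summary(article: str, top_k: int = 3) -> str:
    sentences = [s.strip() for s in article.split(".") if s.strip()]

    # Step 1: generate an initial draft summary from the full article.
    inputs = tokenizer(article, return_tensors="pt", truncation=True)
    draft_ids = model.generate(**inputs, max_new_tokens=80)
    draft = tokenizer.decode(draft_ids[0], skip_special_tokens=True)

    # Step 2: attribute the draft to input sentences with a leave-one-out
    # score: how much the draft's negative log-likelihood rises when a
    # sentence is removed (one of several possible attribution choices).
    labels = tokenizer(text_target=draft, return_tensors="pt").input_ids

    def draft_nll(source_text: str) -> float:
        enc = tokenizer(source_text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            return model(**enc, labels=labels).loss.item()

    base_nll = draft_nll(article)
    scores = [
        draft_nll(". ".join(sentences[:i] + sentences[i + 1:])) - base_nll
        for i in range(len(sentences))
    ]

    # Step 3: re-summarise conditioned on the top-k attributed sentences,
    # steering the model toward content it demonstrably relied on.
    top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:top_k]
    support = ". ".join(sentences[i] for i in sorted(top)) + "."
    refined_inputs = tokenizer(support, return_tensors="pt", truncation=True)
    refined_ids = model.generate(**refined_inputs, max_new_tokens=80)
    return tokenizer.decode(refined_ids[0], skip_special_tokens=True)
```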
Paper Type: Short
Research Area: Summarization
Research Area Keywords: Summarization, Generation
Languages Studied: English
Submission Number: 5185