Keywords: faithfulness, hallucination, summarization, natural language processing, large language models
Abstract: Large Language Models (LLMs) often suffer from hallucinations (output content that is not grounded in the input context) when performing long-form text generation tasks such as summarization. Prior work has shown that hallucinations can be reduced by iteratively critiquing and refining previously generated outputs, using either the same model or a more powerful teacher model as the critic. However, these approaches either require additional test-time compute or assume access to more powerful teacher models, making them costly and less practical. In this work, we propose Self Critique and Refinement-based Preference Optimization (SCRPO), a self-supervised training framework that first constructs a preference dataset by leveraging the LLM's own critique and refinement capabilities, and then applies preference learning to improve the same LLM for faithful summarization. Experiments on three summarization benchmarks demonstrate that our approach outperforms state-of-the-art self-supervised learning methods on faithfulness metrics while maintaining or improving metrics of overall summary quality. Moreover, compared to test-time refinement, our approach not only improves efficiency but also produces more faithful summaries.
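As a rough illustration of the pipeline the abstract describes (self-critique and refinement to build preference pairs, followed by offline preference learning), the sketch below shows one plausible data-construction loop. The method names `generate_summary`, `critique`, and `refine`, and the use of a DPO-style objective, are assumptions for illustration only, not the authors' implementation.

```python
# Hypothetical sketch of SCRPO-style preference-pair construction.
# `model` is assumed to expose generate_summary / critique / refine helpers;
# these are illustrative names, not a real library API.

def build_preference_pairs(model, documents):
    """Build (chosen, rejected) summary pairs via the model's own critique."""
    pairs = []
    for doc in documents:
        draft = model.generate_summary(doc)           # initial summary
        feedback = model.critique(doc, draft)         # model critiques its own output
        revised = model.refine(doc, draft, feedback)  # refined, more faithful summary
        # Treat the refined summary as preferred over the original draft.
        pairs.append({"prompt": doc, "chosen": revised, "rejected": draft})
    return pairs

# The resulting pairs would then be used for preference optimization
# (e.g., a DPO-style loss) to update the same model offline, so no extra
# critique/refine passes are needed at test time.
```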
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 15029