Abstract: Large Language Models (LLMs) have demonstrated remarkable capabilities in abstractive summarization. However, traditional one-shot generation approaches and self-iterative LLM methods often suffer from overconfidence, inconsistent feedback, and overcorrection. To address these limitations, we propose RevalSum, a novel LLM-based iterative summarization framework driven by objective evaluators. RevalSum integrates an external multi-dimensional evaluator that provides fine-grained revision suggestions after each generation step, guiding the LLM to perform targeted refinements. This approach effectively overcomes the key shortcomings of existing self-refinement methods and achieves strong performance across multiple evaluation metrics on the CNN/DM and XSum datasets.
Paper Type: Short
Research Area: Summarization
Research Area Keywords: summarization, LLMs
Contribution Types: Model analysis & interpretability, Surveys
Languages Studied: English
Submission Number: 8020