Evaluation of Question-Answering Based Text Summarization using LLM Invited Paper

Published: 01 Jan 2024 · Last Modified: 19 Feb 2025 · AITest 2024 · CC BY-SA 4.0
Abstract: Question-answering based text summarization can produce personalized, specific summaries; however, the primary challenge is generating and selecting the questions that users expect the summary to answer. Large language models (LLMs) offer an automatic way to generate these questions from the original text. By prompting the LLM to answer the selected questions based on the original text, high-quality summaries can be produced. In this paper, we experiment with an approach to question generation, question selection, and text summarization using the LLM GPT-4o. We also conduct a comparative study of existing summarization approaches and evaluation metrics to understand how to produce personalized and useful summaries. Based on the experimental results, we explain why question-answering based text summarization achieves better performance.
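The pipeline the abstract describes (generate candidate questions from the source text, then prompt the model to answer them and assemble the answers into a summary) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `ask_llm` function is a hypothetical stand-in for a real GPT-4o API call, and the prompts and canned responses are assumptions for demonstration only.

```python
# Sketch of question-answering based summarization.
# `ask_llm` is a placeholder for a real LLM call (e.g., GPT-4o via an
# API client); here it returns canned text so the pipeline runs end to end.

def ask_llm(prompt: str) -> str:
    if "generate" in prompt.lower():
        # Pretend the model proposed two questions, one per line.
        return "1. What is the main finding?\n2. What method was used?"
    return "A canned answer grounded in the source text."

def generate_questions(text: str, n: int = 2) -> list[str]:
    """Ask the model for n questions a summary of `text` should answer."""
    prompt = (
        f"Generate {n} questions a reader would want a summary "
        f"of the following text to answer:\n{text}"
    )
    raw = ask_llm(prompt)
    # Strip the leading "1. ", "2. ", ... numbering from each line.
    return [line.split(". ", 1)[-1] for line in raw.splitlines() if line.strip()]

def summarize(text: str, n_questions: int = 2) -> str:
    """Answer each generated question from the text; join answers as a summary."""
    questions = generate_questions(text, n_questions)
    answers = [
        ask_llm(f"Answer based only on the text below.\nQuestion: {q}\nText: {text}")
        for q in questions
    ]
    return " ".join(answers)
```

In a real system, question *selection* (choosing which generated questions best match the user's interests) would sit between the two steps; here all generated questions are answered.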