Abstract: Question-answering based text summarization can produce personalized and specific summaries; however, the primary challenge lies in generating and selecting the questions that users expect the summary to answer. Large language models (LLMs) offer an automatic method for generating such questions from the original text. By prompting the LLM to answer the selected questions based on the original text, high-quality summaries can be produced. In this paper, we experiment with an approach to question generation, question selection, and text summarization using GPT-4o. We also conduct a comparative study of existing summarization approaches and evaluation metrics to understand how to produce personalized and useful summaries. Based on the experimental results, we explain why question-answering based text summarization achieves better performance.