Enhancing Answers Verbalization Using Large Language Models

Published: 01 Jan 2024 · Last Modified: 19 May 2025 · SEMANTICS 2024 · CC BY-SA 4.0
Abstract: Purpose: This study investigates the verbalization of answers generated by knowledge graph question answering (KGQA) systems using large language models. In user-centric applications, such as dialogue systems and voice assistants, answer verbalization is an essential step to enhance the quality of interactions. Methodology: We experimented with different large language models to verbalize answers from knowledge-based question-answering systems. In particular, we fine-tuned the language models T5, BART, and PEGASUS on different inputs, including SPARQL queries and triples, to determine which model performs best for answer verbalization. Findings: We found that fine-tuning language models and introducing additional knowledge, such as SPARQL queries, achieves state-of-the-art results in verbalizing answers from KGQA systems. Value: Our approach can be used to generate answer verbalizations for different KGQA systems, including dialogue systems and voice assistants.
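To illustrate the kind of input conditioning the abstract describes, the following is a minimal, hypothetical sketch of how a KGQA result might be linearized into a single sequence-to-sequence input string (e.g., for a model like T5). The function name, field prefixes, and separator are assumptions for illustration, not the paper's actual preprocessing.

```python
# Hypothetical sketch: linearize a question, its KGQA answer, and optional
# supporting knowledge (triples and/or the SPARQL query) into one input
# string for a seq2seq verbalization model. All naming here is illustrative.
def build_verbalization_input(question, answer, triples=None, sparql=None):
    parts = [f"question: {question}", f"answer: {answer}"]
    if triples:
        # Flatten each (subject, predicate, object) triple into plain text.
        parts.append("triples: " + " | ".join(" ".join(t) for t in triples))
    if sparql:
        parts.append(f"sparql: {sparql}")
    # Join fields with a separator token; the fine-tuned model would be
    # trained to emit a fluent natural-language answer from this string.
    return " </s> ".join(parts)


example = build_verbalization_input(
    "Who wrote Hamlet?",
    "William Shakespeare",
    triples=[("Hamlet", "author", "William_Shakespeare")],
    sparql="SELECT ?a WHERE { dbr:Hamlet dbo:author ?a }",
)
```

A fine-tuned model would then map such a string to a fluent sentence like "Hamlet was written by William Shakespeare."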