Abstract: Purpose: This study investigates the verbalization of answers generated by knowledge graph question answering (KGQA) systems using large language models. In user-centric applications, such as dialogue systems and voice assistants, answer verbalization is an essential step to enhance the quality of interactions. Methodology: We experimented with different large language models to verbalize answers from KGQA systems. In particular, we fine-tuned three models (T5, BART, and PEGASUS) on different inputs, including SPARQL queries and triples, to determine which model performs best for answer verbalization. Findings: We found that fine-tuning language models and introducing additional knowledge, such as SPARQL queries, achieves state-of-the-art results in verbalizing answers from KGQA systems. Value: Our approach can be used to generate answer verbalizations for different KGQA systems in user-centric applications such as dialogue systems and voice assistants.
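The sketch below illustrates the kind of fine-tuning setup the abstract describes, assuming a standard Hugging Face seq2seq workflow with T5; the input serialization (the `verbalize:`/`question:`/`query:`/`triples:` markers), the example data, and the hyperparameters are illustrative assumptions, not the paper's reported configuration.

```python
# A minimal sketch of fine-tuning a seq2seq model to verbalize KGQA answers.
# Assumes the transformers and torch packages; the input format is hypothetical.
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def build_input(question: str, sparql: str, triples: list[tuple[str, str, str]]) -> str:
    # Serialize question, SPARQL query, and answer triples into one source
    # string; the field markers here are assumed for illustration.
    triple_text = " ".join(f"{s} {p} {o}" for s, p, o in triples)
    return f"verbalize: question: {question} query: {sparql} triples: {triple_text}"

# One hypothetical training example: KGQA output paired with its verbalization.
source = build_input(
    "Who wrote Hamlet?",
    "SELECT ?a WHERE { dbr:Hamlet dbo:author ?a }",
    [("Hamlet", "author", "William Shakespeare")],
)
target = "Hamlet was written by William Shakespeare."

inputs = tokenizer(source, return_tensors="pt", truncation=True)
labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids

# Standard seq2seq training step: the model learns to generate the
# natural-language answer from the serialized KGQA output.
loss = model(**inputs, labels=labels).loss
loss.backward()
```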