Optimizing Knowledge Graph to Text as Knowledge-Augmented Prompt with Alignment Tuning for Question Answering

ACL ARR 2025 February Submission 1037 Authors

12 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Large language models struggle to maintain up-to-date knowledge and to avoid hallucinations. To address these issues, recent research has explored integrating external knowledge sources into language models, with Knowledge Graphs emerging as a particularly promising option given their structured and factual nature. However, effectively incorporating knowledge graphs into language models remains challenging due to the modality gap and the lack of query-aware knowledge selection in existing knowledge-to-text methods. This paper proposes Knowledge Graph to Knowledge-Augmented Prompt (KG2P), a framework that optimizes knowledge graph-to-text transformation for language model prompting. KG2P introduces black-box optimization to systematically learn effective knowledge transformation, together with query-aware alignment to enhance relevance. Unlike previous approaches that rely on rigid linearization or static human annotations, KG2P dynamically adapts knowledge augmentation to improve reasoning in language models. Experimental results on knowledge graph question-answering benchmarks demonstrate that KG2P consistently outperforms existing methods. These findings suggest that task-specific optimization is essential for effectively incorporating structured knowledge into language models, pointing to a new direction for knowledge-augmented prompting.
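To make the pipeline the abstract describes more concrete, here is a minimal sketch of knowledge-graph-to-prompt augmentation: triples are verbalized into text, scored for relevance against the query, and the top-scoring facts are prepended to the question. This is an illustrative sketch, not the paper's method: the lexical-overlap scorer merely stands in for KG2P's learned, black-box-optimized alignment, and all names (Triple, relevance, build_prompt) are hypothetical.

```python
# Minimal sketch of KG-to-text prompting with query-aware triple selection.
# Hypothetical names throughout; the lexical-overlap scorer is a stand-in
# for the learned, black-box-optimized alignment described in the abstract.
import re
from dataclasses import dataclass

@dataclass
class Triple:
    head: str
    relation: str
    tail: str

    def verbalize(self) -> str:
        # Naive linearization; KG2P instead learns this transformation.
        return f"{self.head} {self.relation.replace('_', ' ')} {self.tail}."

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def relevance(triple: Triple, query: str) -> float:
    # Stand-in for query-aware alignment: fraction of query tokens
    # that also appear in the verbalized triple.
    q = tokens(query)
    return len(q & tokens(triple.verbalize())) / max(len(q), 1)

def build_prompt(query: str, triples: list[Triple], k: int = 3) -> str:
    # Keep only the k most query-relevant triples and prepend their
    # verbalizations to the question as augmented context.
    top = sorted(triples, key=lambda t: relevance(t, query), reverse=True)[:k]
    context = "\n".join(t.verbalize() for t in top)
    return f"Knowledge:\n{context}\n\nQuestion: {query}\nAnswer:"

triples = [
    Triple("Marie Curie", "born_in", "Warsaw"),
    Triple("Marie Curie", "won", "the Nobel Prize in Physics"),
    Triple("Warsaw", "capital_of", "Poland"),
]
print(build_prompt("Where was Marie Curie born?", triples, k=2))
```

In this toy run, the two triples sharing the most tokens with the question are kept and verbalized into the prompt; KG2P's contribution, per the abstract, is to replace both the fixed verbalization and the fixed selection rule with components optimized end-to-end against downstream QA performance.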
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: knowledge base QA
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 1037