Question-Aware Knowledge Graph Prompting for Large Language Models

24 Sept 2024 (modified: 22 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Knowledge Graph, Question Answering, Large Language Model, Prompt
Abstract: Large Language Models (LLMs) have demonstrated significant advances across natural language processing tasks, yet they often struggle with tasks that require external domain-specific knowledge, such as Multiple Choice Question Answering (MCQA). Integrating Knowledge Graphs (KGs) with LLMs has been explored as a way to enhance LLMs' reasoning capabilities; however, existing methods either involve computationally expensive fine-tuning or rely on noisy retrieval of KG information. Recent efforts leverage Graph Neural Networks (GNNs) to generate KG-based soft prompts for LLMs, but these approaches lack a question-relevance assessment within the GNN and do not exploit the relations among answer options. In this paper, we propose QAP, a novel approach that addresses these challenges by optimizing the use of KGs in MCQA tasks. Our method injects question embeddings into the GNN aggregation process, enabling the model to assess the relevance of KG information in the context of the question. Additionally, QAP enables inter-option interactions through an attention module that explicitly models relationships between answer options: multiple attention heads applied to the GNN output allow the model to capture and compare features across different options, strengthening cross-option reasoning. Our approach thus tightens the connection between GNNs and LLMs while better exploiting the relationships between answer options. Experimental results show that QAP outperforms state-of-the-art models on multiple public MCQA datasets, validating its effectiveness and scalability.
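
The abstract describes two mechanisms: question-conditioned aggregation inside the GNN, and multi-head attention across answer options that yields soft prompts for the LLM. Below is a minimal PyTorch sketch of how such components might be wired together; the class names, tensor shapes, and the particular relevance-scoring function are illustrative assumptions, not the paper's actual implementation.

```python
# A hypothetical sketch of the two ideas in the abstract:
# (1) question-aware GNN aggregation, (2) cross-option attention.
# All names and design details here are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuestionAwareGNNLayer(nn.Module):
    """One message-passing layer whose neighbor weights are conditioned
    on a question embedding (assumed formulation)."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.score = nn.Linear(2 * dim, 1)  # relevance of a node to the question

    def forward(self, node_feats, adj, q_emb):
        # node_feats: (N, d); adj: (N, N) {0,1} adjacency; q_emb: (d,)
        q = q_emb.unsqueeze(0).expand_as(node_feats)             # (N, d)
        rel = self.score(torch.cat([node_feats, q], dim=-1))     # (N, 1)
        logits = rel.transpose(0, 1).expand_as(adj)              # logit for attending to node j
        logits = logits.masked_fill(adj == 0, float("-inf"))
        alpha = torch.nan_to_num(torch.softmax(logits, dim=-1))  # isolated rows -> 0
        return F.relu(alpha @ self.msg(node_feats))              # question-weighted aggregation

class CrossOptionAttention(nn.Module):
    """Multi-head attention over per-option graph summaries, so each
    option's soft prompt can attend to the other options."""
    def __init__(self, dim, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, option_feats):
        # option_feats: (B, n_options, d) -- one pooled vector per option
        out, _ = self.attn(option_feats, option_feats, option_feats)
        return out  # soft prompts, one per option, to prepend to the LLM input

# Toy usage: one question, 4 answer options, each with a 6-node KG subgraph.
torch.manual_seed(0)
d, n_opts, n_nodes = 64, 4, 6
gnn, xattn = QuestionAwareGNNLayer(d), CrossOptionAttention(d)
q_emb = torch.randn(d)
pooled = []
for _ in range(n_opts):
    feats = torch.randn(n_nodes, d)
    adj = (torch.rand(n_nodes, n_nodes) > 0.5).float()
    pooled.append(gnn(feats, adj, q_emb).mean(dim=0))   # pool subgraph to one vector
soft_prompts = xattn(torch.stack(pooled).unsqueeze(0))  # (1, n_opts, d)
print(soft_prompts.shape)  # torch.Size([1, 4, 64])
```

In this sketch, cross-option reasoning comes from letting each option's pooled graph representation attend to the others before being handed to the LLM as a soft prompt; the paper's actual attention design over the GNN output may differ.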
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3317