A Hierarchical Language Model for Interpretable Graph Reasoning
Abstract: We consider the emerging problem of solving graph reasoning tasks with language models (LMs). A key challenge in processing large graphs is the excessively long textual descriptions required to prompt LMs. Moreover, LMs struggle to capture structural information effectively and to provide interpretable results that reflect the graph's inherent structure. To address these challenges, we propose a novel framework based on hierarchical language models for graph reasoning tasks. Our framework integrates both local and global graph information, allowing LMs to answer a variety of graph-related queries with high efficacy, efficiency, and interpretability. The proposed local-global scheme hierarchically captures node-centric local information and interaction-centric global structure while reducing computational cost on large-scale graph tasks. Furthermore, we demonstrate the interpretability of our model using intrinsic attention weights and established explainers. Extensive experiments on seven graph reasoning datasets and seven real-world datasets, covering node-, link-, and graph-level tasks, highlight the superiority of our method.