LLM Enhancers for GNNs: An Analysis from the Perspective of Causal Mechanism Identification

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: This study analyzes the under-explored properties of LLM-enhanced GNNs using interchange interventions and proposes an optimization module to improve information transfer.
Abstract: The use of large language models (LLMs) as feature enhancers that optimize node representations, which are then used as inputs for graph neural networks (GNNs), has shown significant potential in graph representation learning. However, the fundamental properties of this approach remain underexplored. To address this gap, we conduct a more in-depth analysis based on the interchange intervention method. First, we construct a synthetic graph dataset with controllable causal relationships, enabling precise manipulation of semantic relationships and causal modeling to provide data for analysis. Using this dataset, we conduct interchange interventions to examine the deeper properties of LLM enhancers and GNNs, uncovering their underlying logic and internal mechanisms. Building on these analytical results, we design a plug-and-play optimization module that improves information transfer between LLM enhancers and GNNs. Experiments across multiple datasets and models validate the proposed module.
Lay Summary: Graph neural networks (GNNs) are powerful tools for analyzing complex relationships in data, but improving how they represent these relationships is still an ongoing challenge. In this study, we explore how large language models (LLMs) can be used to enhance the features that GNNs rely on for better performance. We created a synthetic dataset with controlled causal relationships to better understand how LLMs and GNNs interact. By experimenting with these data, we were able to reveal hidden patterns in the way these two systems work together. Based on our findings, we developed a new optimization tool that helps improve the way information is shared between LLMs and GNNs. Our experiments show that this tool can significantly boost the performance of GNNs on various tasks. This work provides valuable insights into the inner workings of LLM-enhanced GNNs and presents a practical solution for improving their capabilities.
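The abstract's central tool, the interchange intervention, can be illustrated on a toy two-stage model: run the model on a "base" input, but swap in the intermediate representation computed from a "source" input, and compare outputs. The sketch below is purely hypothetical (the functions `f` and `g` stand in for the LLM enhancer and GNN stages; they are not the paper's actual pipeline), assuming a scalar input for simplicity:

```python
def f(x):
    # Stage 1 (stand-in for an LLM enhancer producing an enhanced
    # feature from the raw input): here simply x * 2.
    return x * 2

def g(h, x):
    # Stage 2 (stand-in for a GNN that consumes both the enhanced
    # feature h and the raw input x): here simply h + x.
    return h + x

def interchange_intervention(base_x, source_x):
    """Run the model on base_x, but swap in the intermediate
    representation computed from source_x. Comparing the result
    with the ordinary forward pass isolates the causal contribution
    of the intermediate variable."""
    h_swapped = f(source_x)      # representation taken from the source run
    return g(h_swapped, base_x)  # finish the base run with the swapped state

base_x, source_x = 1, 10
y_base = g(f(base_x), base_x)                          # ordinary pass: 3
y_intervened = interchange_intervention(base_x, source_x)  # swapped: 21
```

If the output under the swap moves toward the source run's output, the intermediate variable causally mediates that part of the behavior; this is the logic the paper applies to probe what LLM-enhanced features actually carry into the GNN.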
Link To Code: https://github.com/WX4code/LLMEnhCausalMechanism
Primary Area: Deep Learning->Graph Neural Networks
Keywords: Large language models; Graph Neural Networks; Causal
Submission Number: 682