GNNs as Adapters for LLMs on Text-Attributed Graphs

Published: 23 Jan 2024, Last Modified: 23 May 2024
Venue: TheWebConf24
Keywords: Text-attributed graph; graph neural network; language model
Abstract: Text-Attributed Graphs (TAGs), which interlace textual information with graph structures, pose unique challenges and opportunities for joint text and graph modeling. Recently, large language models (LLMs) have greatly advanced the generative and predictive power of text modeling. However, existing approaches to jointly modeling text and graph structure either incur high computational costs or offer limited representational power. In this work, we propose GraphAdapter, which harnesses the power of an LLM on TAGs without fine-tuning its weights. Given a TAG, a GNN adapter is trained to reduce the LLM's error in predicting the next word of the text sequences attached to nodes. Once trained, this GNN adapter can be seamlessly fine-tuned for various downstream tasks. Across extensive node classification experiments in multiple domains, GraphAdapter achieves an average improvement of 5% while being more computationally efficient than baselines. We further validate its effectiveness with various language models, including RoBERTa, GPT-2, and Llama 2.
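The abstract's core mechanism is a GNN trained against a frozen LLM's next-token prediction error on node texts. Below is a minimal, self-contained PyTorch sketch of that idea under stated assumptions: the module names (MeanGNNLayer, GraphAdapterHead), all dimensions, and the concatenation-based fusion of LLM hidden states with GNN node embeddings are illustrative assumptions, not the authors' actual GraphAdapter architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F


class MeanGNNLayer(nn.Module):
    # One mean-aggregation message-passing layer over node neighbors,
    # standing in for whichever GNN architecture the paper uses (assumption).
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin_self = nn.Linear(in_dim, out_dim)
        self.lin_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: [N, in_dim] node features; adj: dense [N, N] 0/1 adjacency.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neigh = (adj @ x) / deg  # mean of neighbor features
        return F.relu(self.lin_self(x) + self.lin_neigh(neigh))


class GraphAdapterHead(nn.Module):
    # Fuses the frozen LLM's token hidden states with a GNN node embedding
    # and predicts next-token logits; only this head and the GNN are trained.
    # The fusion scheme here (concatenate, project, tanh) is an assumption.
    def __init__(self, llm_dim, gnn_dim, vocab_size):
        super().__init__()
        self.fuse = nn.Linear(llm_dim + gnn_dim, llm_dim)
        self.out = nn.Linear(llm_dim, vocab_size)

    def forward(self, llm_hidden, node_emb):
        # llm_hidden: [N, seq, llm_dim]; node_emb: [N, gnn_dim].
        node = node_emb.unsqueeze(1).expand(-1, llm_hidden.size(1), -1)
        fused = torch.tanh(self.fuse(torch.cat([llm_hidden, node], dim=-1)))
        return self.out(fused)


# Toy pre-training step. llm_hidden would come from a frozen LLM run offline
# over each node's text; here it is random data purely for illustration.
N, feat_dim, llm_dim, gnn_dim, vocab, seq = 8, 16, 32, 32, 100, 5
gnn = MeanGNNLayer(feat_dim, gnn_dim)
head = GraphAdapterHead(llm_dim, gnn_dim, vocab)
opt = torch.optim.Adam(list(gnn.parameters()) + list(head.parameters()), lr=1e-3)

adj = (torch.rand(N, N) > 0.7).float()        # random toy graph
feats = torch.randn(N, feat_dim)              # e.g. pooled node text features
llm_hidden = torch.randn(N, seq, llm_dim)     # stand-in frozen-LLM states
targets = torch.randint(0, vocab, (N, seq))   # next-token ids per position

logits = head(llm_hidden, gnn(feats, adj))
loss = F.cross_entropy(logits.flatten(0, 1), targets.flatten())
opt.zero_grad()
loss.backward()
opt.step()
print(f"next-token loss: {loss.item():.3f}")

After this self-supervised stage, the abstract says the trained GNN adapter is fine-tuned for downstream tasks; in a sketch like this, that would amount to replacing the next-token head with a task head (e.g. a node classifier) while reusing the GNN weights.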
Track: Graph Algorithms and Learning for the Web
Submission Guidelines Scope: Yes
Submission Guidelines Blind: Yes
Submission Guidelines Format: Yes
Submission Guidelines Limit: Yes
Submission Guidelines Authorship: Yes
Student Author: Yes
Submission Number: 1885