Keywords: Graph Learning; Large Language Models
TL;DR: In this work, we propose GRIP, which injects a graph knowledge base into LoRA parameters so that LLMs can perform inference over the graph without needing explicit graph context.
Abstract: Large Language Models (LLMs) have demonstrated remarkable capabilities in modeling sequential textual data and generalizing across diverse tasks. However, adapting LLMs to effectively handle structural data, such as graphs, remains a challenging problem. Some approaches adopt complex strategies to convert graphs into text sequences, resulting in significant token overhead and rendering them impractical for large-scale graphs. Others introduce additional modules to encode graphs into fixed-size token representations for LLMs. However, these methods typically require large-scale fine-tuning and complex alignment procedures, yet often yield sub-optimal results due to poor modality alignment. Inspired by in-parameter knowledge injection for test-time adaptation of LLMs, we propose GRIP, a novel framework that equips LLMs with the ability to internalize complex relational information from graphs through carefully designed fine-tuning tasks. This knowledge is efficiently stored within lightweight LoRA parameters, enabling the fine-tuned LLM to perform a wide range of graph-related tasks \textbf{without requiring access to the original graph} at inference time. Extensive experiments across multiple benchmarks validate the effectiveness and efficiency of our approach.
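The following is a minimal sketch (not the authors' code) of the in-parameter injection idea described above: graph facts are verbalized into text, only lightweight LoRA parameters are fine-tuned on them, and the adapted model is then queried with no graph in the prompt. The model name, prompt templates, LoRA settings, and training loop below are illustrative assumptions, not GRIP's actual fine-tuning tasks.

```python
# Sketch: store graph knowledge in LoRA parameters, then answer without the graph.
# Assumes the `transformers` and `peft` packages; all names/templates are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "gpt2"  # assumption: any small causal LM works for the sketch
tok = AutoTokenizer.from_pretrained(model_name)
base = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the frozen base model with lightweight LoRA adapters (gpt2 attention is `c_attn`).
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(base, lora_cfg)

# Verbalize graph structure into plain-text training examples (illustrative templates).
edges = [("paper_12", "cites", "paper_7"), ("paper_7", "has_topic", "graph learning")]
texts = [f"Fact: {h} {r} {t}." for h, r, t in edges]

optim = torch.optim.AdamW(model.parameters(), lr=2e-4)
model.train()
for epoch in range(3):
    for text in texts:
        batch = tok(text, return_tensors="pt")
        out = model(**batch, labels=batch["input_ids"])  # standard LM loss on graph facts
        out.loss.backward()
        optim.step()
        optim.zero_grad()

# At inference time the graph is no longer provided: the knowledge lives in the adapter.
model.eval()
prompt = "Question: Which paper does paper_12 cite? Answer:"
ids = tok(prompt, return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=20)[0]))
```

In this reading, "without requiring access to the original graph" means the adapter weights themselves act as the graph knowledge base at inference time; GRIP's actual fine-tuning tasks for internalizing relational structure are more elaborate than the single fact-completion objective shown here.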
Submission Number: 57