GaLoRA: Parameter-Efficient Graph-Aware LLMs for Node Classification

Published: 23 Sept 2025, Last Modified: 28 Oct 2025 · NPGML Poster · CC BY 4.0
Keywords: Text-Attributed Graphs, Graph Neural Networks, Large Language Models, Low-Rank Adaptation, Node Classification
TL;DR: We propose GaLoRA, a parameter-efficient framework for node classification on text-attributed graphs that injects GNN embeddings into an LLM using LoRA, achieving competitive performance with far fewer trainable parameters than full LLM fine-tuning.
Abstract: The rapid rise of large language models (LLMs) and their ability to capture semantic relationships have led to their adoption in a wide range of applications. Text-attributed graphs (TAGs) are a notable example, where LLMs can be combined with Graph Neural Networks (GNNs) to improve node classification. In a TAG, each node is associated with textual content; such graphs are common in domains such as social networks, citation networks, and recommendation systems. Effective learning on TAGs yields representations that capture both the structural and the textual information of the graph, improving decision-making in these domains. We present GaLoRA, a parameter-efficient framework that integrates structural information into LLMs. GaLoRA delivers competitive performance on node classification over TAGs, performing on par with state-of-the-art models while training only 0.24% of the parameters required by full LLM fine-tuning. We evaluate GaLoRA on three real-world datasets to showcase its effectiveness in combining structural and semantic information on TAGs.
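To illustrate the parameter-efficiency claim, here is a minimal NumPy sketch of the general idea the TL;DR describes: a frozen weight matrix augmented with a trainable low-rank (LoRA-style) update, applied to an input that concatenates a textual embedding with a GNN-derived structural embedding. The dimensions, the concatenation scheme, and the variable names are illustrative assumptions, not GaLoRA's actual architecture, which is specified in the paper itself.

```python
# Hypothetical LoRA-style sketch; GaLoRA's real design may differ.
import numpy as np

rng = np.random.default_rng(0)

d_text, d_graph, d_out, rank = 96, 32, 128, 4
d_in = d_text + d_graph  # text embedding concatenated with GNN embedding

W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, rank))                   # trainable factor, zero-init
                                              # so the update starts as a no-op

def forward(text_emb, graph_emb):
    """Frozen path plus low-rank trainable update."""
    x = np.concatenate([text_emb, graph_emb])
    return W @ x + B @ (A @ x)

x_text = rng.standard_normal(d_text)
x_graph = rng.standard_normal(d_graph)
y = forward(x_text, x_graph)

# Only A and B are trained; W stays frozen.
trainable_fraction = (A.size + B.size) / W.size
print(f"trainable fraction: {trainable_fraction:.4f}")
```

With rank 4 and a 128-dimensional square layer, the trainable factors hold 1,024 parameters versus 16,384 in the frozen weight, a ratio of about 6%; GaLoRA's reported 0.24% reflects applying this kind of low-rank adaptation across a full LLM rather than a single toy layer.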
Submission Number: 116