GNN-as-Judge: Unleashing the Power of LLMs for Graph Semi-Supervised Learning with GNN Feedback

Published: 26 Jun 2025, Last Modified: 15 Jul 2025 · MLoG-GenAI@KDD Oral · CC BY 4.0
Keywords: Large Language Models, Graph Neural Networks, Graph Semi-supervised Learning
TL;DR: We propose GNN-as-Judge, a framework that leverages GNNs' feedback to select reliable pseudo-labels and a weakly supervised fine-tuning approach for tuning LLMs.
Abstract: Large Language Models (LLMs) have shown strong performance on text-attributed graphs (TAGs) due to their superior semantic understanding of textual node features. However, their effectiveness in the semi-supervised setting, where labeled nodes are limited, remains constrained, since fine-tuning LLMs usually requires sufficient labeled data, especially when the TAG exhibits complex structural patterns. In essence, this paper targets two key challenges: (i) the difficulty of generating reliable pseudo-labels on TAGs for LLMs, and (ii) the need to mitigate potential label noise when fine-tuning LLMs with pseudo-labels. To address these challenges, we propose a new framework, GNN-as-Judge, which unleashes the power of LLMs for semi-supervised learning on TAGs by incorporating the structural inductive bias of Graph Neural Networks (GNNs). Specifically, GNN-as-Judge introduces a collaborative pseudo-labeling strategy that exploits both the agreement and disagreement between LLMs and GNNs, and a weakly supervised LLM fine-tuning algorithm that distills knowledge from informative pseudo-labels while mitigating potential label noise. Experiments on different TAG datasets demonstrate that GNN-as-Judge significantly outperforms existing methods, especially in low-resource regimes.
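To make the collaborative pseudo-labeling idea concrete, the following is a minimal, generic sketch of agreement-based pseudo-label selection between two models. It is an illustration of the general co-training-style principle the abstract describes, not the paper's actual criterion; the function name, the averaged-confidence score, and the `agree_threshold` parameter are all assumptions for this sketch.

```python
import numpy as np

def select_pseudo_labels(llm_probs, gnn_probs, agree_threshold=0.9):
    """Generic agreement-based pseudo-label selection (illustrative only).

    llm_probs, gnn_probs: (num_nodes, num_classes) class-probability arrays
    from an LLM and a GNN over the unlabeled nodes.
    Returns the indices of selected nodes and their pseudo-labels.
    """
    llm_pred = llm_probs.argmax(axis=1)
    gnn_pred = gnn_probs.argmax(axis=1)
    # Average the two models' top-class confidence per node.
    conf = (llm_probs.max(axis=1) + gnn_probs.max(axis=1)) / 2
    # Keep only nodes where both models agree and are jointly confident.
    mask = (llm_pred == gnn_pred) & (conf >= agree_threshold)
    return np.flatnonzero(mask), llm_pred[mask]
```

Nodes where the two models disagree are simply discarded here; GNN-as-Judge, by contrast, also exploits disagreement as a signal, and pairs selection with noise-aware fine-tuning.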
Submission Number: 17