Glance for Context: Learning When to Leverage LLMs for Node-Aware GNN-LLM Fusion

Published: 26 Jan 2026 · Last Modified: 11 Feb 2026 · ICLR 2026 Poster · CC BY 4.0
Keywords: GNN, Graph Learning, GNN-LLM, Homophily, Heterophily
TL;DR: We show GNNs and LLMs excel on different structures and train a selective router to query the LLM for nodes that GNNs are likely to struggle on.
Abstract: Learning on text-attributed graphs has motivated the use of Large Language Models (LLMs) for graph learning. However, most fusion strategies are applied uniformly across all nodes and attain only small overall performance gains. We argue this stems from aggregate metrics that obscure when LLMs provide benefit, hiding actionable signals for new strategies. In this work, we reframe LLM–GNN fusion around nodes where GNNs typically falter. We first show that performance can differ significantly between GNNs and LLMs, with each excelling on distinct structural patterns, such as local homophily. To leverage this finding, we propose **GLANCE** (**G**NN with **L**LM **A**ssistance for **N**eighbor- and **C**ontext-aware **E**mbeddings), a framework that invokes an LLM to refine a GNN's predictions. GLANCE employs a lightweight router that, given inexpensive per-node signals, decides whether to query the LLM. Because LLM calls are non-differentiable, the router is trained with an advantage-based objective that compares the utility of querying the LLM against relying solely on the GNN. Across multiple benchmarks, GLANCE achieves the best performance balance across node subgroups, with significant gains on heterophilous nodes (up to +5.8%) while maintaining top overall performance (up to +1.1%). Our findings advocate for adaptive, node-aware GNN-LLM architectures, showing that selectively invoking the LLM where it adds value enables scalable application of LLMs to large graphs.
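To make the routing idea concrete, below is a minimal, hypothetical sketch of how a lightweight router could be trained with an advantage-based objective when the LLM call itself provides no gradients. The class and variable names (`Router`, `gnn_correct`, `llm_correct`, the choice of per-node signals) are illustrative assumptions, not the paper's actual implementation; GLANCE's exact objective and signals may differ.

```python
# Illustrative sketch (PyTorch), assuming precomputed GNN and LLM predictions on a
# labeled training split. The advantage compares "query the LLM" against "trust the GNN",
# and only the router receives gradients; LLM outputs are treated as fixed outcomes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Router(nn.Module):
    """Lightweight MLP mapping cheap per-node signals to a query-the-LLM probability."""
    def __init__(self, in_dim: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, signals: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(signals)).squeeze(-1)  # P(query LLM | node)

def advantage_loss(p_query, gnn_correct, llm_correct):
    """Advantage-weighted binary cross-entropy: push the router toward querying on
    nodes where the LLM beats the GNN, and toward abstaining where the GNN suffices.
    Ties (advantage == 0) contribute no gradient."""
    advantage = llm_correct.float() - gnn_correct.float()    # in {-1, 0, +1}
    target = (advantage > 0).float()                         # query only when the LLM helps
    weight = advantage.abs()                                 # mask out ties
    bce = F.binary_cross_entropy(p_query, target, reduction="none")
    return (weight * bce).mean()

# --- toy usage on random data ---
n_nodes, sig_dim = 256, 4
signals = torch.randn(n_nodes, sig_dim)                      # e.g. local homophily, GNN confidence
gnn_correct = torch.randint(0, 2, (n_nodes,))                # 1 if the GNN prediction is correct
llm_correct = torch.randint(0, 2, (n_nodes,))                # 1 if the LLM prediction is correct

router = Router(sig_dim)
opt = torch.optim.Adam(router.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = advantage_loss(router(signals), gnn_correct, llm_correct)
    loss.backward()
    opt.step()
```

At inference time, such a router would query the LLM only for nodes whose predicted probability exceeds a threshold, keeping the number of LLM calls small on large graphs.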
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 12299