Keywords: Large Language Models (LLMs), Graph Neural Networks (GNNs), Zero-Shot Learning, Message Passing
Abstract: Graph-structured data is ubiquitous across scientific and industrial domains, making tasks such as node classification, edge prediction, and graph classification fundamental in modern machine learning. Graph Neural Networks (GNNs) have emerged as the dominant framework for these tasks, leveraging message passing algorithms to propagate information across nodes and learn expressive representations. However, performing zero-shot learning on graphs—where the model must generalize to unseen tasks or labels without additional training—remains highly challenging due to the structural complexity and relational dependencies within graphs.
Recent efforts have explored using Large Language Models (LLMs) for zero-shot reasoning on graphs by converting graph structures into textual descriptions. While promising, these methods face significant limitations due to the restricted context window of LLMs and the risk of hallucinations, especially when processing dense or large-scale graphs.
In this paper, we propose Large Language Model Graph Message Passing (\MethodShortName), a novel framework designed to address the zero-shot learning problem on graphs. Our method combines the scalability of message passing with the reasoning capabilities of LLMs: rather than exchanging vector embeddings as in traditional GNNs, nodes exchange task-aware textual messages, enabling the LLM to explore the graph level by level in a structured, interpretable manner tailored to the downstream task.
By aligning graph exploration with the LLM's strengths in language-based inference, our approach achieves strong zero-shot performance across a range of graph-based tasks, demonstrating the potential of LLM-driven message passing as a powerful alternative to standard graph representation learning methods.
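The abstract describes nodes exchanging task-aware textual messages that an LLM aggregates level by level. Below is a minimal, illustrative sketch of that idea, not the authors' implementation: it assumes a hypothetical `call_llm` helper standing in for any LLM API, a plain adjacency-list graph, and per-node textual attributes.

```python
# Illustrative sketch of LLM-driven textual message passing (assumptions noted above).
from typing import Dict, List

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with any chat/completions API."""
    raise NotImplementedError

def llm_message_passing(
    adj: Dict[str, List[str]],     # node id -> neighbor ids
    node_text: Dict[str, str],     # node id -> textual attributes
    task: str,                     # description of the downstream task
    num_rounds: int = 2,           # number of graph levels to explore
) -> Dict[str, str]:
    # Each node starts with a task-aware textual summary of its own attributes.
    messages = {
        v: call_llm(f"Task: {task}\nSummarize this node for the task:\n{node_text[v]}")
        for v in adj
    }

    for _ in range(num_rounds):
        updated = {}
        for v, neighbors in adj.items():
            # Neighbors' messages are aggregated as text instead of vector embeddings.
            neighbor_block = "\n".join(f"- {messages[u]}" for u in neighbors)
            prompt = (
                f"Task: {task}\n"
                f"Node attributes: {node_text[v]}\n"
                f"Current message: {messages[v]}\n"
                f"Neighbor messages:\n{neighbor_block}\n"
                "Update this node's message with task-relevant information from "
                "its neighbors, in one short paragraph."
            )
            updated[v] = call_llm(prompt)
        messages = updated  # synchronous update: one level of the graph per round

    return messages
```

In this sketch each round plays the role of one message-passing layer in a GNN, so after `num_rounds` rounds a node's message reflects information from its `num_rounds`-hop neighborhood in textual form.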
Submission Number: 15