LAMP: An LLM-based Message Passing Architecture for Text-Rich Graphs

08 Sept 2025 (modified: 06 Jan 2026) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Text-Rich Graph; GNN; LLM
TL;DR: We propose LAMP, a framework that internalizes message passing within LLM decoder layers while keeping raw node texts accessible at every propagation layer.
Abstract: Text-rich graphs, which integrate complex structural dependencies with abundant textual information, are ubiquitous yet remain challenging for existing learning paradigms. An ideal model must simultaneously satisfy **semantic fidelity** (reasoning over full raw text), **structural integrity** (faithful multi-hop propagation), and **computational scalability** (efficient handling of large neighborhoods). Current approaches inevitably compromise at least one of these aspects: GNN-based methods compress text into fixed embeddings, losing semantic detail; LLM-based methods serialize graphs into token sequences, weakening structural reasoning; and recent "LLM-as-GNN" hybrids improve structural integrity but still bypass explicit reasoning over raw content. We introduce **LAMP**, an **L**LM-based **A**rchitecture for **M**essage **P**assing that overcomes this trade-off. LAMP reinterprets stacked decoder layers as message-passing steps and adopts a dual-representation scheme: at every iteration, it anchors inference on each node’s raw text while propagating compact summaries across neighbors. Furthermore, LAMP unifies discriminative (e.g., node classification) and generative (e.g., GraphQA) tasks under a single generative formulation, allowing end-to-end training without task-specific heads. Extensive experiments show that LAMP effectively unifies graph propagation and text reasoning, achieving competitive performance while offering new insights into the role of LLMs as general-purpose graph learners. *Code will be available upon publication.*
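To make the dual-representation scheme from the abstract concrete, here is a minimal PyTorch sketch of one decoder layer doubling as one message-passing step. This is not the paper's released code; the class name `LAMPLayer` and all design details (mean-pooled summaries, a prepended message token) are illustrative assumptions. The point it demonstrates is the stated trade-off resolution: each node's raw-text token states stay visible to attention at every layer, while only a compact summary vector is exchanged with neighbors.

```python
import torch
import torch.nn as nn


class LAMPLayer(nn.Module):
    """Hypothetical sketch: one decoder layer as one message-passing step.

    Each node keeps its raw-text token states (semantic fidelity) and
    exchanges only a pooled per-node summary with neighbors (scalability).
    """

    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, tok: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # tok: [N, T, d] raw-text token states per node; adj: [N, N] adjacency.
        # 1) Propagate: pool each node's tokens into a compact summary and
        #    mean-aggregate the summaries of its neighbors.
        summary = tok.mean(dim=1)                          # [N, d]
        deg = adj.sum(-1, keepdim=True).clamp(min=1)       # avoid divide-by-zero
        msg = (adj @ summary) / deg                        # [N, d] neighbor messages
        # 2) Reason: prepend the aggregated message as an extra token, so
        #    attention mixes neighbor context with the node's own raw text.
        seq = torch.cat([msg.unsqueeze(1), tok], dim=1)    # [N, 1 + T, d]
        h, _ = self.self_attn(seq, seq, seq)
        seq = self.norm1(seq + h)
        seq = self.norm2(seq + self.ffn(seq))
        return seq[:, 1:, :]                               # updated token states


# Stacking L such layers gives L hops of propagation interleaved with
# full-text reasoning, e.g.:
layer = LAMPLayer(d_model=64)
tok = torch.randn(5, 12, 64)                # 5 nodes, 12 text tokens each
adj = (torch.rand(5, 5) > 0.5).float()      # random toy adjacency
out = layer(tok, adj)                       # -> [5, 12, 64]
```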
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 2983