Graph Linearization Methods for Reasoning on Graphs with Large Language Models

ICLR 2026 Conference Submission17596 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Graph Linearization, Large Language Models, Graph Reasoning, Token Sequences, Edge Ordering, Centrality, Degeneracy
TL;DR: The paper proposes graph linearization methods—based on node centrality, degeneracy, and relabeling—to transform graph structures into token sequences optimized for large language models.
Abstract: Large language models have evolved to process multiple modalities beyond text, such as images and audio, which motivates us to explore how to effectively leverage them for graph reasoning tasks. The key question, therefore, is how to transform graphs into linear sequences of tokens - a process we term graph linearization - so that LLMs can handle graphs naturally. We argue that graphs should be linearized meaningfully to reflect certain properties of natural language text, such as local dependency and global alignment, so that contemporary LLMs, trained on trillions of textual tokens, can better understand graphs. To achieve this, we develop several graph linearization methods based on graph centrality and degeneracy, further enhanced with node relabeling techniques. Experimental results demonstrate the effectiveness of our methods compared to a random linearization baseline. Our work introduces novel graph representations suitable for LLMs, contributing to the potential integration of graph machine learning with the trend of multimodal processing using a unified transformer model.
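To make the idea concrete, the sketch below shows one plausible way to linearize a graph into an edge-token sequence ordered by degree centrality or by core number (a degeneracy-style ordering), with nodes relabeled by their position in that ordering. It assumes `networkx`; the function names, token format, and use of degree centrality and core numbers are illustrative assumptions, not the authors' actual implementation.

```python
import networkx as nx


def _linearize(G, order):
    """Emit edges as '(u,v)' tokens, visiting nodes in the given order and
    relabeling them 0..n-1 by their position in that order."""
    mapping = {node: i for i, node in enumerate(order)}  # node relabeling
    tokens, seen = [], set()
    for u in order:
        for v in sorted(G.neighbors(u), key=mapping.get):
            edge = frozenset((u, v))
            if edge not in seen:  # emit each undirected edge once
                seen.add(edge)
                tokens.append(f"({mapping[u]},{mapping[v]})")
    return " ".join(tokens)


def linearize_by_centrality(G):
    # Most central nodes (here: degree centrality) come first.
    c = nx.degree_centrality(G)
    return _linearize(G, sorted(G.nodes, key=c.get, reverse=True))


def linearize_by_degeneracy(G):
    # Nodes with the highest core number come first (a degeneracy-style ordering).
    core = nx.core_number(G)
    return _linearize(G, sorted(G.nodes, key=core.get, reverse=True))


if __name__ == "__main__":
    G = nx.karate_club_graph()
    print(linearize_by_centrality(G)[:72], "...")
    print(linearize_by_degeneracy(G)[:72], "...")
```

Because neighboring edges in the sequence share high-order nodes, nearby tokens tend to be related (local dependency), and the relabeling gives every graph a canonical node naming (global alignment); the specific scoring functions and token syntax here are stand-ins for whatever the paper actually uses.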
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 17596