Do LLMs Learn Graph Representations Without Context?

18 Sept 2025 (modified: 14 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Large Language Models, Natural Language Processing, Linear Probing
Abstract: Large Language Models (LLMs) are trained on next-word prediction yet often appear to acquire structured knowledge beyond surface statistics. A central question is whether such internal representations emerge in a zero-shot setting, without additional cues, or only when explicit context is provided. We address this by training GPT-style models on paths sampled from synthetic and real-world graphs under two regimes: in-context learning, where subgraph information is provided, and zero-shot learning, where only query nodes are given. We evaluate models through adjacency matrix reconstruction and linear probing of hidden activations. We find evidence that in-context learning models consistently recover graph structure and encode neighborhood information, while zero-shot learning models fail to develop comparable representations.
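The abstract does not specify the probing setup beyond "linear probing of hidden activations," but a minimal sketch of one plausible version is shown below: a linear readout trained to predict each node's adjacency row from its hidden representation. All variable names, data shapes, and the choice of logistic-regression probes are illustrative assumptions, not the authors' implementation.

```python
# Minimal linear-probing sketch (assumption: one hidden vector is
# collected per node, e.g. averaged over that node's token positions
# in sampled paths; the probe target is the node's adjacency row).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_nodes, hidden_dim = 50, 128

# Stand-in for activations extracted from a trained GPT-style model.
H = rng.normal(size=(n_nodes, hidden_dim))

# Stand-in for the ground-truth graph: a random symmetric adjacency matrix.
A = rng.integers(0, 2, size=(n_nodes, n_nodes))
A = np.triu(A, 1)
A = A + A.T

H_train, H_test, A_train, A_test = train_test_split(
    H, A, test_size=0.2, random_state=0
)

# One logistic probe per candidate neighbor: can a linear readout of the
# hidden state predict whether node j is adjacent to the represented node?
accuracies = []
for j in range(n_nodes):
    y_train, y_test = A_train[:, j], A_test[:, j]
    if len(np.unique(y_train)) < 2:  # skip columns with a single class
        continue
    probe = LogisticRegression(max_iter=1000).fit(H_train, y_train)
    accuracies.append(probe.score(H_test, y_test))

print(f"mean probe accuracy: {np.mean(accuracies):.3f}")
```

Because the probe is linear, above-chance accuracy would indicate that neighborhood information is linearly decodable from the activations, which is the kind of evidence the abstract attributes to the in-context learning regime but not the zero-shot one.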
Primary Area: interpretability and explainable AI
Submission Number: 10898