GraphARC: A Comprehensive Benchmark for Graph-Based Abstract Reasoning

ICLR 2026 Conference Submission 16970 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: few-shot, abstract reasoning, graph, scalability, benchmark, ARC, reasoning models, compositional generalization
TL;DR: GraphARC introduces a benchmark for few-shot abstract reasoning on graphs, generalizing ARC-style tasks beyond grids to scalable, graph-structured data.
Abstract: Relational reasoning lies at the heart of intelligence, but existing benchmarks are typically confined to formats such as grids or text. We introduce GraphARC, a benchmark for abstract reasoning on graph-structured data. GraphARC generalizes the few-shot transformation learning paradigm of the Abstraction and Reasoning Corpus (ARC). Each task requires inferring a transformation rule from a few input-output pairs and applying it to a new test graph, covering local, global, and hierarchical graph transformations. Unlike grid-based ARC, GraphARC instances can be generated at scale across diverse graph families and sizes, enabling systematic evaluation of generalization abilities. We evaluate state-of-the-art language models on GraphARC and observe clear limitations. Models can answer questions about graph properties but often fail to solve the full graph transformation task, revealing a comprehension-execution gap. Performance further degrades on larger instances, exposing scaling barriers. More broadly, by combining aspects of node classification, link prediction, and graph generation within a single framework, GraphARC provides a promising testbed for future graph foundation models.
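To make the few-shot paradigm described in the abstract concrete, here is a minimal, hypothetical sketch of one such task. The graph representation (adjacency dicts) and the hidden rule ("connect every pair of nodes at distance exactly two") are illustrative assumptions, not GraphARC's actual task format: a solver would see only the input-output pairs and would have to infer the rule before applying it to the test graph.

```python
# Hypothetical ARC-style graph task (illustrative only; not GraphARC's format).
# Hidden rule assumed here: add an edge between every pair of nodes that are
# at distance exactly 2 in the input graph.

def apply_rule(adj):
    """Return a new adjacency dict with the hidden transformation applied."""
    new = {u: set(vs) for u, vs in adj.items()}
    for u in adj:
        for v in adj[u]:
            for w in adj[v]:
                if w != u and w not in adj[u]:
                    # u and w share a common neighbor v but are not adjacent,
                    # so they are at distance 2: connect them.
                    new[u].add(w)
                    new[w].add(u)
    return new

# Few-shot "training" pair: the path 0-1-2 becomes a triangle.
train_in = {0: {1}, 1: {0, 2}, 2: {1}}
train_out = apply_rule(train_in)

# "Test" instance: a star graph; the rule connects all leaves pairwise,
# yielding the complete graph on four nodes.
test_in = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
prediction = apply_rule(test_in)
```

A solver is scored on whether `prediction` matches the ground-truth output graph exactly, which is what makes the task harder than answering isolated property questions about the graph.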
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 16970