Keywords: Large Language Models, Geometric Spatial Representation, Procedural Geometry Reasoning
TL;DR: We introduce GeoGramBench, a new benchmark probing LLMs’ ability to translate procedural geometry code into internal spatial representations, revealing that current models struggle with this core aspect of spatial-symbolic reasoning.
Abstract: Geometric spatial reasoning forms the foundation of many applications in artificial intelligence, yet the ability of large language models (LLMs) to operate over geometric spatial information expressed in procedural code remains underexplored. In this paper, we address this gap by formalizing the \texttt{Program-to-Geometry} task, which challenges models to translate programmatic drawing code into accurate and abstract geometric reasoning. To evaluate this capability, we present \textbf{GeoGramBench}, a benchmark of 500 carefully refined problems organized by a tailored three-level taxonomy that considers geometric complexity rather than traditional mathematical reasoning complexity. Our comprehensive evaluation of 17 frontier LLMs reveals consistent and pronounced deficiencies: even the most advanced models achieve less than 50\% accuracy at the highest abstraction level. By systematically analyzing model behaviors, our study exposes key limitations in program-driven spatial reasoning and positions GeoGramBench as an important resource for benchmarking and advancing behavioral research in symbolic-to-spatial geometric reasoning.
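To make the \texttt{Program-to-Geometry} task concrete, the sketch below shows what such a problem could look like: a model receives only procedural drawing code and a question, and must reconstruct the figure mentally to answer. This is a hypothetical illustration in Python with loosely Asymptote-like drawing syntax, not an actual GeoGramBench item; the benchmark's true item format and answer-checking procedure are assumptions here.

```python
# Hypothetical illustration (not an actual GeoGramBench item): a tiny
# "Program-to-Geometry" problem. The model sees only the drawing program
# and the question as text; the ground truth is computed separately.

import math

# --- procedural drawing code the model receives as plain text -------------
drawing_program = """
A = (0, 0);  B = (6, 0);  C = (6, 8);
draw(A -- B);  draw(B -- C);  draw(C -- A);
label("A", A);  label("B", B);  label("C", C);
"""
question = "What is the length of side CA of triangle ABC?"

# --- ground-truth answer, derived from the same coordinates ---------------
A, B, C = (0, 0), (6, 0), (6, 8)
ground_truth = math.dist(C, A)  # hypotenuse of a 6-8-10 right triangle

print(question)
print(f"Ground truth: {ground_truth:.0f}")  # -> 10
```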
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 15617