GeometryZero: Advancing LLM Geometry Solving via Group Contrastive Policy Optimization

18 Sept 2025 (modified: 24 Dec 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Geometry, Reinforcement Learning, Large Language Model
Abstract: Recent advances in large language models (LLMs) have demonstrated remarkable capabilities across diverse domains, particularly in mathematical reasoning, among which geometry problem solving remains a challenging area where auxiliary construction plays an essential role. Existing approaches either achieve suboptimal performance or rely on colossal LLMs (e.g., GPT-4o), incurring massive computational costs. We posit that reinforcement learning with verifiable reward (e.g., GRPO) offers a promising direction for training smaller models that effectively combine auxiliary construction with robust geometric reasoning. However, directly applying GRPO to geometric reasoning presents fundamental limitations due to its dependence on unconditional rewards, which leads to indiscriminate and counterproductive auxiliary constructions. To address these challenges, we propose Group Contrastive Policy Optimization (**GCPO**), a novel reinforcement learning framework featuring two key innovations: (1) *Group Contrastive Masking*, which adaptively provides positive or negative reward signals for auxiliary construction based on contextual utility, and (2) a *Length Reward* that promotes longer reasoning chains. Building on GCPO, we develop GeometryZero, a family of affordable-size geometric reasoning models that judiciously determine when to employ auxiliary construction. Our extensive empirical evaluation on popular geometric benchmarks (namely Geometry3K and MathVista) demonstrates that GeometryZero models consistently outperform RL baselines (e.g., GRPO, ToRL) across various benchmarks.
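To make the two abstract ideas concrete, here is a minimal sketch of what a GCPO-style reward might look like. Everything below (function name `gcpo_rewards`, the bonus weights, and the exact form of the contrastive and length terms) is an illustrative assumption, not the paper's actual formulation: within each sampled group, responses that use auxiliary construction receive a positive or negative bonus depending on whether construction correlates with correctness in that group, and a capped length term rewards longer reasoning.

```python
# Hypothetical sketch of a GCPO-style reward (all details are assumptions,
# not taken from the paper). Each response in a sampled group carries a
# verifiable correctness flag, an auxiliary-construction flag, and a length.

def gcpo_rewards(group, max_len=2048, aux_bonus=0.5, len_weight=0.1):
    """group: list of dicts with keys 'correct' (bool),
    'uses_aux' (bool), and 'length' (token count)."""
    aux = [g for g in group if g["uses_aux"]]
    no_aux = [g for g in group if not g["uses_aux"]]

    def acc(subset):
        # Accuracy within a subset of the group (0.0 if empty).
        return sum(g["correct"] for g in subset) / len(subset) if subset else 0.0

    # Group-contrastive signal: +1 if construction-using responses are more
    # often correct than the rest of this group, otherwise -1.
    contrast = 1.0 if acc(aux) > acc(no_aux) else -1.0

    rewards = []
    for g in group:
        r = 1.0 if g["correct"] else 0.0            # verifiable answer reward
        if g["uses_aux"]:
            r += aux_bonus * contrast               # contrastive construction term
        r += len_weight * min(g["length"], max_len) / max_len  # length reward
        rewards.append(r)
    return rewards
```

In this sketch the construction bonus flips sign per group, so constructions are only encouraged where they empirically help, which is the behavior the abstract attributes to Group Contrastive Masking.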
Primary Area: foundation or frontier models, including LLMs
Submission Number: 11432