LEARNING TO GENERATE FORMALLY VERIFIABLE STEP-BY-STEP LOGIC REASONING VIA STRUCTURED FORMAL INTERMEDIARIES

Submitted: 16 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: GRPO, Formal Proof, LLM, Chain of Thought
TL;DR: Formally verifiable logic reasoning by constructing JSON-like reasoning steps
Abstract: Large language models (LLMs) have recently demonstrated impressive performance on complex, multi-step reasoning tasks, especially when post-trained with outcome-rewarded reinforcement learning. However, outcome rewards often overlook flawed intermediate steps, yielding unreliable reasoning chains even when final answers are correct. To address this, we propose ProSFI (Process Reward over Structured Formal Intermediates), a novel reward method that enhances reasoning reliability without compromising accuracy. Instead of generating formal proofs directly, which a modest-size (7B) model can rarely do successfully, the model outputs structured intermediate steps aligned with its natural-language reasoning. Each step is then verified by a formal prover, and only fully validated reasoning chains receive high rewards. Integrating formal verification in this way guides the model toward step-by-step, machine-checkable proofs and hence more credible final answers. ProSFI offers a simple and effective approach to training trustworthy reasoning models.
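The all-or-nothing reward described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the JSON step schema, the `prosfi_reward` name, and the `verify_step` callback (standing in for a call to an external formal prover) are all assumptions for the sake of the example.

```python
import json


def prosfi_reward(response: str, verify_step) -> float:
    """Illustrative all-or-nothing process reward (not the paper's code):
    high reward only when every structured step passes verification.
    `verify_step` is a hypothetical stand-in for a formal prover call."""
    try:
        # Assume the model emits its intermediate steps as a JSON list.
        steps = json.loads(response)
    except json.JSONDecodeError:
        return 0.0  # malformed structured output earns no reward
    if not isinstance(steps, list) or not steps:
        return 0.0
    # Only fully validated reasoning chains receive the high reward.
    return 1.0 if all(verify_step(s) for s in steps) else 0.0


# Toy usage with a trivial "verifier" that checks the step schema only.
demo = json.dumps([{"premise": "p -> q", "fact": "p", "conclusion": "q"}])
print(prosfi_reward(demo, lambda s: "conclusion" in s))  # → 1.0
print(prosfi_reward("not json", lambda s: True))         # → 0.0
```

In a training setup this scalar would feed the GRPO objective in place of (or alongside) the outcome reward, so that chains with any unverifiable step are penalized even when the final answer is right.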
Supplementary Material: pdf
Primary Area: foundation or frontier models, including LLMs
Submission Number: 6919