CRANE: Reasoning with constrained LLM generation

Published: 06 Mar 2025 · Last Modified: 17 Mar 2025 · ICLR 2025 Workshop VerifAI Poster · CC BY 4.0
Keywords: Constrained Decoding, LLM reasoning
TL;DR: Reasoning-augmented grammar-constrained LLM generation
Abstract: Code generation, symbolic math reasoning, and other tasks require LLMs to produce outputs that are both syntactically and semantically correct. Constrained LLM generation is a promising direction for enforcing adherence to a formal grammar, but prior works have empirically observed that strict enforcement of formal constraints often diminishes the reasoning capabilities of LLMs. In this work, we first provide a theoretical explanation for why constraining LLM outputs to very restrictive grammars that only allow syntactically valid final answers reduces the reasoning capabilities of the model. Second, we demonstrate that by augmenting the output grammar with carefully designed additional rules, it is always possible to preserve the reasoning capabilities of the LLM while ensuring syntactic and semantic correctness in its outputs. Building on these theoretical insights, we propose a reasoning-augmented constrained decoding algorithm, CRANE, which effectively balances the correctness of constrained generation with the flexibility of unconstrained generation. Experiments on multiple open-source LLMs and benchmarks show that CRANE significantly outperforms both state-of-the-art constrained decoding strategies and standard unconstrained decoding, achieving up to a 10% improvement over baselines on challenging symbolic reasoning benchmarks.
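The abstract describes the mechanism only at a high level. The sketch below illustrates the general idea of reasoning-augmented constrained decoding: grammar constraints are enforced only on a delimited final-answer span, while the reasoning prefix stays unconstrained. The scripted model, the integer grammar, and the <answer> delimiters are hypothetical placeholders chosen for exposition, not CRANE's actual interface or implementation.

```python
import re

# Illustrative sketch only: `model_propose` is a scripted stand-in for a
# real LLM's ranked next-token candidates, and the integer grammar and
# <answer> delimiters are assumptions made for this example.

ANSWER_OPEN, ANSWER_CLOSE = "<answer>", "</answer>"
INT_GRAMMAR = re.compile(r"-?\d+")  # the final answer must be an integer

def model_propose(answer_so_far: str) -> list[str]:
    """Placeholder for the LLM's ranked next-token candidates."""
    script = {"": ["2"], "2": ["0"], "20": [ANSWER_CLOSE]}
    return script.get(answer_so_far, [ANSWER_CLOSE])

def grammar_allows_prefix(text: str) -> bool:
    """True if `text` can still be extended into a valid integer."""
    return re.fullmatch(r"-?\d*", text) is not None

def crane_style_decode(prompt: str) -> str:
    # Phase 1: unconstrained free-form reasoning (placeholder text here);
    # the grammar is deliberately not enforced on this span.
    reasoning = "Let x = 2 + 3 = 5, so x * 4 = 20. " + ANSWER_OPEN
    # Phase 2: once the answer span opens, reject any candidate token that
    # the grammar cannot extend into a valid final answer.
    answer = ""
    while True:
        for tok in model_propose(answer):
            if tok == ANSWER_CLOSE and INT_GRAMMAR.fullmatch(answer):
                return prompt + "\n" + reasoning + answer + ANSWER_CLOSE
            if tok != ANSWER_CLOSE and grammar_allows_prefix(answer + tok):
                answer += tok
                break
        else:
            raise RuntimeError("no grammar-valid continuation")

print(crane_style_decode("What is (2 + 3) * 4?"))
```

In this toy form, the augmented grammar amounts to "free text, then a delimited span matching the answer grammar," which mirrors the paper's claim that adding rules around the answer grammar can preserve reasoning while still guaranteeing a syntactically valid final answer.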
Submission Number: 32