Grammars of Formal Uncertainty: When to Trust LLMs in Automated Reasoning Tasks

Published: 18 Sept 2025 · Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY 4.0
Keywords: Large Language Models, Formal Verification, Uncertainty Quantification, Stochastic Context-Free Grammars, Automated Reasoning, SMT-based Autoformalization, Selective Verification
TL;DR: An SCFG-based framework characterizes uncertainty in LLM formal reasoning, enabling selective verification that reduces errors with minimal abstention.
Abstract: Large language models (LLMs) show remarkable promise for democratizing automated reasoning by generating formal specifications. However, a fundamental tension exists: LLMs are probabilistic, while formal verification demands deterministic guarantees. This paper addresses this epistemological gap by comprehensively investigating failure modes and uncertainty quantification (UQ) in LLM-generated formal artifacts. Our systematic evaluation of five frontier LLMs reveals that the accuracy impact of Satisfiability Modulo Theories (SMT) based autoformalization is domain-specific (from +34.8% on logical tasks to -44.5% on factual ones), and that standard UQ techniques, such as the entropy of token probabilities, fail to identify these errors. We introduce a probabilistic context-free grammar (PCFG) framework for modeling LLM outputs, yielding a refined uncertainty taxonomy. We find that uncertainty signals are task-dependent (e.g., grammar entropy for logic, AUROC > 0.93). Finally, a lightweight fusion of these signals enables selective verification, drastically reducing errors (by 14-100%) with minimal abstention and transforming LLM-driven formalization into a reliable engineering discipline.
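
To make the grammar-entropy signal and the signal fusion mentioned in the abstract concrete, here is a minimal Python sketch. The function names, the per-nonterminal entropy estimator, and the fusion weights and threshold are illustrative assumptions, not the paper's exact formulation.

```python
from collections import Counter, defaultdict
from math import log2


def grammar_entropy(productions):
    """Sum of Shannon entropies of each nonterminal's empirical expansion
    distribution, estimated from production rules observed when parsing
    sampled LLM outputs against the target formal grammar (e.g., SMT-LIB).

    `productions` is an iterable of (lhs, rhs) pairs such as
    ("Term", ("(", "and", "Term", "Term", ")")).
    """
    counts = defaultdict(Counter)
    for lhs, rhs in productions:
        counts[lhs][tuple(rhs)] += 1

    entropy = 0.0
    for rhs_counts in counts.values():
        total = sum(rhs_counts.values())
        entropy += -sum((c / total) * log2(c / total)
                        for c in rhs_counts.values())
    return entropy


def should_abstain(grammar_h, token_h, w_grammar=0.7, w_token=0.3, threshold=1.0):
    """Selective verification: abstain from trusting the generated artifact
    when a weighted fusion of uncertainty signals (grammar entropy plus mean
    token-probability entropy) exceeds a threshold tuned on held-out data.
    Weights and threshold here are placeholders, not the paper's values."""
    return w_grammar * grammar_h + w_token * token_h > threshold
```

In this reading, low grammar entropy means the model's sampled formalizations concentrate on a few structural choices, which the abstract reports as the strongest error signal on logical tasks (AUROC > 0.93), while the fused score drives the accept-or-abstain decision in selective verification.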
Supplementary Material: zip
Primary Area: Probabilistic methods (e.g., variational inference, causal inference, Gaussian processes)
Submission Number: 24258