The Curious Case of the Unreliable Decipherer: What LLMs Struggle With While Explaining Formal Proofs
Abstract: Formal methods, such as symbolic proofs, offer a principled way to make Large Language Model (LLM) reasoning more reliable. However, it is unclear whether LLMs can actually use these proofs to generate explanations that are both faithful and understandable to humans. We introduce ProofTeller, a new benchmark for evaluating this capability. On a new dataset with over 68,000 human-annotated tokens, we evaluate several LLMs on three tasks: identifying key proof steps, summarizing proofs, and generating a user-targeted message. Across tasks and settings, both automated metrics and human evaluation reveal a critical reliability gap: LLMs over-emphasize steps near the final conclusion, whereas humans draw on evidence distributed throughout the proof. These findings expose a fundamental mismatch between the reasoning strategies of current LLMs and those of humans, underscoring the need for approaches that enable LLMs to use formal proofs reliably while faithfully generating comprehensible reasoning chains.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: benchmark, LLM evaluation, reliability, faithfulness
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: English
Reassignment Request Area Chair: This is not a resubmission
Reassignment Request Reviewers: This is not a resubmission
Software: zip
Data: zip
A1 Limitations Section: This paper has a limitations section.
A2 Potential Risks: N/A
B Use Or Create Scientific Artifacts: Yes
B1 Cite Creators Of Artifacts: Yes
B1 Elaboration: 3
B2 Discuss The License For Artifacts: Yes
B2 Elaboration: 3
B3 Artifact Use Consistent With Intended Use: Yes
B3 Elaboration: 4
B4 Data Contains Personally Identifying Info Or Offensive Content: No
B4 Elaboration: There is no personally identifying information or offensive content in any of the data used or created.
B5 Documentation Of Artifacts: Yes
B5 Elaboration: 3
B6 Statistics For Data: Yes
B6 Elaboration: 3
C Computational Experiments: Yes
C1 Model Size And Budget: Yes
C1 Elaboration: 4
C2 Experimental Setup And Hyperparameters: Yes
C2 Elaboration: 4
C3 Descriptive Statistics: Yes
C3 Elaboration: 5
C4 Parameters For Packages: Yes
C4 Elaboration: 4
D Human Subjects Including Annotators: Yes
D1 Instructions Given To Participants: Yes
D1 Elaboration: 3/6/G
D2 Recruitment And Payment: Yes
D2 Elaboration: 3/6
D3 Data Consent: Yes
D3 Elaboration: G
D4 Ethics Review Board Approval: Yes
D4 Elaboration: Ethics statement
D5 Characteristics Of Annotators: Yes
D5 Elaboration: 3/6
E Ai Assistants In Research Or Writing: No
E1 Information About Use Of Ai Assistants: N/A
E1 Elaboration: N/A. We did not use any AI assistants.
Author Submission Checklist: yes
Submission Number: 26