Hierarchical Attention Generates Better Proofs

ACL ARR 2025 February Submission 3646 Authors

15 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Large language models (LLMs) have shown promise in formal theorem proving, but their token-level processing often fails to capture the inherently hierarchical structure of mathematical proofs. We introduce Hierarchical Attention, a regularization method that aligns LLMs' attention mechanisms with mathematical reasoning structures. Our approach establishes a five-level hierarchy, from foundational elements to high-level concepts, ensuring structured information flow during proof generation. Experiments demonstrate that our method improves proof success rates by 2.05% on miniF2F and 1.69% on ProofNet while reducing proof complexity by 23.81% and 16.50%, respectively. Code and models will be released.
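The abstract does not spell out the form of the regularizer, so the following is only a minimal PyTorch sketch of one plausible instantiation: a penalty on attention mass that flows against a per-token hierarchy. The function name `hierarchical_attention_penalty`, the tensor shapes, the five discrete levels (0 = foundational, 4 = high-level concept), and the assumption that information should flow upward (higher-level queries attending to lower-level keys) are all illustrative assumptions, not details taken from the paper.

```python
import torch

def hierarchical_attention_penalty(attn: torch.Tensor,
                                   levels: torch.Tensor) -> torch.Tensor:
    """Penalize attention mass that flows against the hierarchy.

    attn:   (batch, heads, seq, seq) attention probabilities.
    levels: (batch, seq) integer hierarchy level per token,
            0 (foundational) through 4 (high-level concept);
            this 0-4 encoding is an assumption for illustration.
    Returns a scalar regularization term to add to the training loss.
    """
    # violation[b, i, j] is True where query token i sits at a *lower*
    # level than key token j, i.e. information would flow downward.
    q_lvl = levels.unsqueeze(-1)               # (batch, seq, 1)
    k_lvl = levels.unsqueeze(-2)               # (batch, 1, seq)
    violation = (q_lvl < k_lvl).unsqueeze(1)   # broadcast over heads

    # Mean attention mass placed on hierarchy-violating positions.
    return (attn * violation).sum(dim=-1).mean()


# Hypothetical usage: add the penalty to the language-modeling loss,
# with lam as an assumed regularization weight.
# loss = lm_loss + lam * hierarchical_attention_penalty(attn_probs, token_levels)
```

Under this reading, the standard attention computation is untouched at inference time; the hierarchy only shapes training gradients, which is consistent with the method being described as a regularizer rather than an architectural change.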
Paper Type: Long
Research Area: Machine Learning for NLP
Research Area Keywords: graph-based methods, structured prediction, representation learning
Contribution Types: NLP engineering experiment
Languages Studied: Lean4
Submission Number: 3646