Abstract: Large language models (LLMs) frequently struggle with complex logical reasoning tasks. We address this difficulty with the help of Lean, a theorem-proving framework: we first formalize logical reasoning problems as theorems within Lean, and then prove or disprove them. This methodology serves a dual purpose: it eliminates the logical inconsistencies typical of LLM outputs, and it handles complex logical reasoning tasks effectively. Central to our approach are the numerous theorem proofs written in Lean, which encapsulate human logical reasoning. By training a model on this data, we transfer the enhanced reasoning ability to logical reasoning problems. Our approach achieves perfect accuracy on ProofWriter using reduced training data and achieves state-of-the-art performance on FOLIO, highlighting the potential of our method in logical reasoning tasks.
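To illustrate the kind of formalization the abstract describes, a toy ProofWriter-style instance (hypothetical, not taken from the paper's data) might be encoded as a Lean theorem: the problem's facts and rules become hypotheses, and answering the query amounts to proving or disproving the goal.

```lean
-- Toy problem (illustrative sketch only):
--   Rule:  "All cats are animals."
--   Fact:  "Tom is a cat."
--   Query: "Is Tom an animal?"  → answered "true" iff the theorem is provable.
section
variable (Obj : Type) (tom : Obj)
variable (Cat Animal : Obj → Prop)

theorem tom_is_animal
    (rule : ∀ x, Cat x → Animal x)  -- all cats are animals
    (fact : Cat tom)                -- Tom is a cat
    : Animal tom :=
  rule tom fact                     -- apply the rule to the fact
end
```

A proof (or a refutation of the negated goal) is checked by Lean's kernel, which is what rules out the logical inconsistencies an LLM might otherwise produce in free-form reasoning.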
Paper Type: long
Research Area: Question Answering
Contribution Types: NLP engineering experiment
Languages Studied: English