Boosting Large Language Model Reasoning with Theorem Proving

Anonymous

16 Aug 2023 · ACL ARR 2023 August Blind Submission
Abstract: Large Language Models (LLMs) frequently struggle with complex reasoning tasks. A recent structured AI methodology addresses this by dividing each task into two stages: symbolic formalization, handled by the LLM, and problem-solving, carried out by a symbolic solver. While solvers such as SymPy and Pyke prevent hallucinations, they often fall short on advanced reasoning tasks. This study addresses their limitations by leveraging the extensive reasoning data available in Lean, a programming language for theorem proving. Training a custom model on Lean's rich theorem-proving data greatly enhances its reasoning capacity, allowing it to outperform traditional solvers. We achieve a state-of-the-art result on FOLIO, a logical reasoning dataset, indicating the potential of our method for natural language reasoning.
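The abstract describes formalizing natural-language reasoning in Lean but gives no example; the snippet below is a minimal, hypothetical sketch (not taken from the paper) of how a FOLIO-style syllogism might be formalized and proved in Lean 4. All names (`Person`, `Human`, `Mortal`, `socrates`) are illustrative assumptions.

```lean
-- Hypothetical formalization of a natural-language syllogism:
-- "All humans are mortal. Socrates is human. Therefore Socrates is mortal."
variable (Person : Type)
variable (Human Mortal : Person → Prop)
variable (socrates : Person)

example (h1 : ∀ x, Human x → Mortal x)  -- premise 1
        (h2 : Human socrates)           -- premise 2
        : Mortal socrates :=            -- conclusion
  h1 socrates h2                        -- apply the universal premise to Socrates
```

Once premises are formalized this way, the proof obligation is discharged by Lean's kernel, which (like the symbolic solvers mentioned above) cannot hallucinate a conclusion that does not follow.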
Paper Type: long
Research Area: Semantics: Sentence-level Semantics, Textual Inference and Other areas