LTRAG: Enhancing autoformalization and self-refinement for logical reasoning with Thought-Guided RAG
Abstract: Logical reasoning is fundamental to intelligent systems.
Large language models (LLMs) have demonstrated promise in natural language (NL) reasoning, especially with techniques like chain-of-thought (CoT) prompting.
Neuro-symbolic methods such as Logic-LM and LINC further improve performance on challenging datasets such as FOLIO and AR-LSAT by using LLMs to formalize problems for symbolic solvers, optionally followed by LLM-based refinement.
However, these methods still struggle with the accurate formalization of complex NL problems.
In this paper, we introduce LTRAG, a framework that enhances autoformalization and self-refinement for logical reasoning with Retrieval-Augmented Generation (RAG) by building knowledge bases of thought-guided examples.
Experimental results on FOLIO and AR-LSAT show that LTRAG consistently outperforms Logic-LM and LINC across different models. On AR-LSAT with GPT-4, it achieves an accuracy gain of 13% over Logic-LM.
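For illustration only, the sketch below shows one plausible reading of the retrieval step the abstract describes: a knowledge base of thought-guided examples (an NL problem, the intermediate "thought" explaining how to formalize it, and the resulting formalization) is searched for entries similar to the new problem, and the retrieved triples are assembled into a few-shot prompt for the formalizing LLM. The class and function names, and the use of TF-IDF cosine similarity as the retriever, are assumptions for this sketch and are not taken from the paper.

```python
from dataclasses import dataclass
from typing import List

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


@dataclass
class ThoughtGuidedExample:
    """Hypothetical knowledge-base entry: an NL problem, the 'thought'
    guiding its formalization, and the resulting formalization."""
    problem: str
    thought: str
    formalization: str


def retrieve_examples(query: str,
                      kb: List[ThoughtGuidedExample],
                      k: int = 2) -> List[ThoughtGuidedExample]:
    """Return the k examples most similar to the query problem.
    TF-IDF cosine similarity stands in for whatever retriever LTRAG uses."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform([ex.problem for ex in kb] + [query])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    top = scores.argsort()[::-1][:k]
    return [kb[i] for i in top]


def build_formalization_prompt(query: str,
                               examples: List[ThoughtGuidedExample]) -> str:
    """Assemble a few-shot prompt: each retrieved example contributes its
    problem, thought, and formalization; the new problem is appended last."""
    parts = [f"Problem: {ex.problem}\nThought: {ex.thought}\n"
             f"Formalization: {ex.formalization}" for ex in examples]
    parts.append(f"Problem: {query}\nThought:")
    return "\n\n".join(parts)
```

The same retrieval-then-prompt pattern would apply to the self-refinement stage, with solver error messages and corrected formalizations taking the place of problems and formalizations.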
Paper Type: Short
Research Area: Machine Learning for NLP
Research Area Keywords: Language Modeling, Generation, Machine Learning for NLP, Question Answering
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 6896