RAG-Logic: Enhance Neuro-symbolic Approaches for Logical Reasoning with Retrieval-augmented Generation

ACL ARR 2024 June Submission 3962 Authors

16 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Deductive reasoning over complex natural language poses significant challenges for large language models (LLMs). Benchmarks like ProofWriter and FOLIO highlight these challenges and demonstrate the need for advanced reasoning methods. Current approaches range from direct reasoning methods such as zero-shot, few-shot, and chain-of-thought prompting to hybrid models that integrate LLMs with symbolic solvers. However, these methods often rely on static examples, limiting their adaptability. This paper introduces RAG-Logic, a dynamic example-based framework built on Retrieval-Augmented Generation (RAG) that enhances LLMs' logical reasoning by supplying contextually relevant examples. This approach conserves resources by avoiding extensive fine-tuning and reduces the hallucinations common in traditional models. Our results on the ProofWriter and FOLIO datasets demonstrate the effectiveness of the framework, marking an advancement in logical reasoning tasks.
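The abstract does not spell out the retrieval mechanics, but the core idea (retrieve solved examples similar to the target problem and prepend them as few-shot demonstrations) can be sketched roughly as below. This is a minimal illustration only, assuming an in-memory example pool and a toy bag-of-words similarity; all function and field names here are hypothetical, not the authors' implementation, and a real system would use a dense sentence encoder.

```python
# Sketch of RAG-style dynamic example selection for few-shot logical
# reasoning. Illustrative only; not the RAG-Logic implementation.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; stands in for a dense sentence encoder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_examples(query: str, pool: list[dict], k: int = 3) -> list[dict]:
    # Rank solved examples by similarity to the target problem; keep top-k.
    q = embed(query)
    return sorted(pool, key=lambda ex: cosine(q, embed(ex["problem"])), reverse=True)[:k]

def build_prompt(query: str, examples: list[dict]) -> str:
    # Prepend retrieved examples as few-shot demonstrations for the LLM.
    shots = "\n\n".join(
        f"Problem: {ex['problem']}\nReasoning: {ex['solution']}" for ex in examples
    )
    return f"{shots}\n\nProblem: {query}\nReasoning:"

# Usage: the resulting prompt is sent to an LLM; in a hybrid setup its
# output could then be checked by a symbolic solver.
pool = [
    {"problem": "All cats are animals. Tom is a cat. Is Tom an animal?",
     "solution": "Tom is a cat; all cats are animals; so Tom is an animal. True."},
]
query = "All birds can fly. Tweety is a bird. Can Tweety fly?"
print(build_prompt(query, retrieve_examples(query, pool)))
```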
Paper Type: Short
Research Area: Machine Learning for NLP
Research Area Keywords: knowledge-augmented methods, generative models, transfer learning / domain adaptation, representation learning, few-shot learning
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches to low-resource settings, Publicly available software and/or pre-trained models, Data analysis
Languages Studied: English
Submission Number: 3962