DRR-RAG: Decompose and Refine Reasoning in Retrieval-Augmented Generation for Multi-hop Question Answering
Abstract: Retrieval-augmented generation (RAG) systems are effective at mitigating hallucinations and handling domain-specific challenges in vertical domains. However, these systems often struggle to fully exploit the language capabilities of large language models (LLMs) when answering complex questions that require matching relevant documents from different sources and managing intricate dependencies. In this paper, we introduce a novel framework, Decompose and Refine Reasoning Retrieval-Augmented Generation (DRR-RAG), that leverages the power of LLMs to decompose complex queries and efficiently manage the relationships between sub-questions, thereby improving document retrieval and addressing the challenges of multi-hop questions. We conduct experiments using a local model, a closed-source model with prompting, and fine-tuning. Through extensive experiments on diverse multi-hop datasets, we demonstrate that our approach not only outperforms existing methods in handling complex queries and improving retrieval performance, but is also effective, easy to implement, and highly usable. These results highlight the robustness and practicality of our framework.
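The abstract describes decomposing a complex query into dependent sub-questions and resolving them in order against retrieved documents. As a rough illustrative sketch only, not the paper's actual pipeline, the Python snippet below mocks that decompose-then-refine loop; the functions `decompose`, `retrieve`, and `answer`, and the placeholder dependency format, are all hypothetical stand-ins for LLM and retriever calls.

```python
from dataclasses import dataclass, field

@dataclass
class SubQuestion:
    """One node of a decomposed multi-hop question (hypothetical format)."""
    qid: str
    text: str                              # may contain {qid} placeholders for earlier answers
    depends_on: list = field(default_factory=list)

def decompose(question: str) -> list[SubQuestion]:
    """Stand-in for an LLM call that splits a multi-hop question into dependent sub-questions."""
    return [
        SubQuestion("q1", "Which company developed the product mentioned in: " + question),
        SubQuestion("q2", "Who founded {q1}?", depends_on=["q1"]),
    ]

def retrieve(query: str) -> list[str]:
    """Stand-in for a retriever; returns supporting passages for one sub-question."""
    return [f"<passage relevant to: {query}>"]

def answer(query: str, passages: list[str]) -> str:
    """Stand-in for a reader LLM that answers from the retrieved passages."""
    return f"<answer to: {query}>"

def decompose_and_refine(question: str) -> str:
    """Resolve sub-questions in dependency order, substituting earlier answers
    into later sub-questions before retrieving for them."""
    answers: dict[str, str] = {}
    pending = decompose(question)
    while pending:
        ready = [s for s in pending if all(d in answers for d in s.depends_on)]
        if not ready:
            raise ValueError("circular dependency among sub-questions")
        for sub in ready:
            query = sub.text.format(**answers)   # refine the sub-question with earlier answers
            answers[sub.qid] = answer(query, retrieve(query))
            pending.remove(sub)
    return answers[max(answers)]                 # last resolved sub-question, in this simple sketch

if __name__ == "__main__":
    print(decompose_and_refine("Who founded the company that developed the iPhone?"))
```

In this sketch, managing "relationships between sub-questions" is reduced to a dependency list per sub-question and answer substitution before retrieval; the paper's framework may handle dependencies differently.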
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: Question Answering, NLP Applications, Efficient/Low-Resource Methods for NLP
Contribution Types: NLP engineering experiment, Reproduction study, Approaches to low-resource settings, Approaches to low-compute settings (efficiency), Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 3905