KO-RAG: Knowledge Organization for Retrieval Augmented Large Language Models with Individual-then-Integrated Feedback

ACL ARR 2024 August Submission69 Authors

12 Aug 2024 (modified: 03 Sept 2024) · ACL ARR 2024 August Submission · CC BY 4.0
Abstract: Retrieval-augmented large language models have shown remarkable potential in knowledge-intensive tasks. However, their performance can be compromised by lengthy, noisy, or irrelevant retrieved information. Recent work focuses on knowledge compression but either ignores feedback from the LLM or incorporates only individual feedback. In this paper, we introduce KO-RAG, a knowledge organization method with an external knowledge organization model for retrieval-augmented large language models, trained with individual-then-integrated feedback. KO-RAG learns to organize knowledge in a two-stage framework. In the individual feedback stage, our method ranks and filters retrieved passages by comparing them one by one, measuring the helpfulness of each piece of knowledge individually. In the integrated feedback stage, our method organizes the knowledge as a whole, using the LLM's feedback on sampled knowledge permutations. Moreover, we design an empty-knowledge placeholder that lets KO-RAG organize knowledge dynamically. Evaluation on five open-domain question-answering datasets shows that the proposed method significantly improves the LLMs' performance, outperforming the baseline methods.
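To make the two-stage framework concrete, below is a minimal sketch of an individual-then-integrated feedback pipeline, based only on the abstract's description. The helper llm_score, the EMPTY placeholder token, and the parameters keep and n_samples are all hypothetical names invented for illustration, not the authors' actual implementation; in the paper, the score would come from the LLM's feedback on a (question, context) pair rather than from a random stub.

    import random

    # Hypothetical stub: in KO-RAG this would be the LLM's feedback
    # signal for answering `question` given `context` (e.g. answer
    # likelihood). Here it is a placeholder so the sketch runs.
    def llm_score(question: str, context: str) -> float:
        return random.random()

    EMPTY = "<empty>"  # assumed form of the empty-knowledge placeholder

    def individual_feedback(question, passages, keep=3):
        """Stage 1: score each passage on its own against the empty
        placeholder as a baseline, then rank and filter."""
        baseline = llm_score(question, EMPTY)
        scored = [(p, llm_score(question, p) - baseline) for p in passages]
        scored.sort(key=lambda x: x[1], reverse=True)
        # Drop passages that help less than providing no knowledge at all.
        return [p for p, gain in scored[:keep] if gain > 0]

    def integrated_feedback(question, passages, n_samples=6):
        """Stage 2: sample permutations of the surviving passages (with
        the placeholder included so the empty option stays available)
        and keep the ordering the LLM scores highest as a whole."""
        candidates = passages + [EMPTY]
        best_order, best_score = candidates, float("-inf")
        for _ in range(n_samples):
            order = random.sample(candidates, len(candidates))
            score = llm_score(question, "\n".join(order))
            if score > best_score:
                best_order, best_score = order, score
        return [p for p in best_order if p != EMPTY] or [EMPTY]

Under these assumptions, the empty placeholder serves double duty: in stage 1 it gives a per-passage usefulness baseline, and in stage 2 it lets the organizer return no knowledge at all when every permutation containing passages scores worse, which is what allows the organization to be dynamic.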
Paper Type: Long
Research Area: Generation
Research Area Keywords: retrieval-augmented generation, text-to-text generation
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 69