Abstract: Retrieval-augmented generation (RAG) improves the performance of LLM-based applications by incorporating information from external knowledge bases. However, coupling an LLM to search engines and knowledge sources can also introduce new biases and stereotypes into the system. Previous studies have shown that adjusting retriever bias through fine-tuning can shift the overall bias of the RAG system and thereby mitigate it. In this work, we propose a re-ranking-based method, termed ReFaRAG, as an alternative to fine-tuning for controlling bias in retrieval results. We further investigate how biased retrieval output affects different LLMs within the RAG framework.
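The abstract does not specify how ReFaRAG scores documents, so the following is only a minimal illustrative sketch of bias-aware re-ranking in general: retrieved documents are re-ordered by relevance penalized by an estimated bias score. The `Doc` fields, the `bias` estimates, and the trade-off weight `lam` are all assumptions, not the paper's actual method.

```python
# Illustrative sketch of bias-aware re-ranking for RAG retrieval results.
# The bias scores, the lambda weight, and the document fields below are
# hypothetical; the paper's ReFaRAG scoring is not described in the abstract.

from dataclasses import dataclass


@dataclass
class Doc:
    text: str
    relevance: float  # retriever's similarity score
    bias: float       # assumed bias estimate in [0, 1]; higher = more biased


def rerank(docs, lam=0.5):
    """Re-rank by relevance penalized by estimated bias (illustrative only)."""
    return sorted(docs, key=lambda d: d.relevance - lam * d.bias, reverse=True)


docs = [
    Doc("doc A", relevance=0.9, bias=0.8),
    Doc("doc B", relevance=0.8, bias=0.1),
    Doc("doc C", relevance=0.7, bias=0.0),
]
print([d.text for d in rerank(docs)])  # → ['doc B', 'doc C', 'doc A']
```

Here the highly relevant but heavily biased "doc A" drops below less biased alternatives; tuning `lam` would trade retrieval quality against bias reduction.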
DOI: 10.1007/978-3-032-05727-3_42