Abstract: Retrieval-augmented generation (RAG) is a mainstream method for improving performance on knowledge-intensive tasks. However, current RAG systems often place too much emphasis on retrieved contexts. This can lead to reliance on inaccurate sources and to overlooking the model's inherent knowledge, especially when dealing with misleading or excessive information. To resolve this imbalance, we propose Knowledgeable-r1, a reinforcement learning (RL) framework that can dynamically select, combine, and utilize parametric and contextual knowledge. Unlike existing methods that rely on complex prompting or training pipelines, Knowledgeable-r1 employs adaptive prompts to elicit diverse knowledge preferences, then optimizes responses through reward-driven trajectories. Experiments show that Knowledgeable-r1 significantly enhances robustness and reasoning accuracy both in parametric–contextual knowledge conflict tasks and in general RAG tasks, outperforming baselines by 17.07% in counterfactual scenarios and demonstrating consistent gains across RAG tasks.
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: LLM RAG
Languages Studied: English
Submission Number: 7968