Beyond Static Retrieval Policies: Task-Aware Adaptive RAG With METAR

ICLR 2026 Conference Submission 21800 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Large Language Models, Adaptive Retrieval-augmented Generation, Agent
Abstract: Large Language Models (LLMs) equipped with retrieval-augmented generation (RAG) have been widely applied to knowledge-intensive tasks due to their strong generalization and contextual understanding capabilities. However, indiscriminate use of RAG can increase computational overhead and degrade performance. Adaptive RAG (ARAG), which dynamically determines whether to retrieve, has emerged as a promising solution. In the current literature, ARAG methods typically rely on static decision policies, such as fixed confidence thresholds or task-specific prompts, which are brittle and lack adaptability to diverse task domains. This task brittleness leads to significant performance degradation on unseen tasks, hindering the real-world applicability of these methods. In this work, we formally define the problem of task adaptability for ARAG and introduce quantitative metrics to benchmark current methods. To improve task adaptability, we propose METAR (Memory-Evolving Task-Aware RAG), a novel agentic framework in which an agent learns and refines a procedural memory of task-specific retrieval criteria. Experiments across a wide range of tasks show that our method achieves superior task adaptability compared to existing ARAG approaches.
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 21800