Scaling Test-Time Inference with Policy-Optimized, Dynamic Retrieval-Augmented Generation via KV Caching and Decoding
Keywords: Retrieval-Augmented Generation, Reinforcement Learning, Test-Time Inference Scaling, Memory-Efficient Inference, Question Answering, Knowledge-Intensive NLP
Abstract: We present a comprehensive framework for enhancing Retrieval-Augmented Generation (RAG) systems through dynamic retrieval strategies and reinforcement fine-tuning. This approach significantly improves the performance of large language models on knowledge-intensive tasks, including open-domain question answering and complex reasoning. Our framework integrates two complementary techniques: Policy-Optimized Retrieval-Augmented Generation (PORAG), which optimizes the use of retrieved information, and Adaptive Token-Layer Attention Scoring (ATLAS), which dynamically determines retrieval timing and content based on contextual needs. Together, these techniques enhance both the utilization and relevance of retrieved content, improving factual accuracy and response quality. Designed as a lightweight solution compatible with any Transformer-based LLM without requiring additional training, our framework excels in knowledge-intensive tasks, boosting output accuracy in RAG settings. We further propose CRITIC, a novel method that selectively compresses key-value caches by token importance, mitigating memory bottlenecks in long-context applications. The framework also incorporates test-time scaling techniques to dynamically balance reasoning depth and computational resources, alongside optimized decoding strategies for faster inference. Experiments on benchmark datasets show that our framework reduces hallucinations, strengthens domain-specific reasoning, and achieves significant efficiency and scalability gains over traditional RAG systems. This integrated approach advances the development of robust, efficient, and scalable RAG systems across diverse applications.
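To make the PORAG idea concrete, the following is a minimal, hypothetical sketch of the kind of composite reward a policy-optimized RAG fine-tuner could maximize. The function name porag_style_reward, the weighting alpha, and the token-overlap grounding heuristic are illustrative assumptions, not the paper's actual objective.

# Hedged sketch of a PORAG-style composite reward (all names and weights
# are hypothetical). The reward trades grounding in the retrieved passages
# against overall answer quality, and would be fed to a policy-gradient
# fine-tuner.
def porag_style_reward(answer_tokens, passage_tokens, quality_score, alpha=0.7):
    """Reward = alpha * grounding + (1 - alpha) * quality, where grounding
    is the fraction of answer tokens supported by retrieved passages."""
    supported = sum(tok in passage_tokens for tok in answer_tokens)
    grounding = supported / max(len(answer_tokens), 1)
    return alpha * grounding + (1 - alpha) * quality_score

print(porag_style_reward(["paris", "is", "the", "capital"],
                         {"paris", "capital", "france", "is", "the"},
                         quality_score=0.9))  # ~0.97

Weighting grounding above raw quality is one way such a reward could discourage answers that ignore the retrieved evidence.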
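In the same spirit, here is a hedged sketch of attention-driven retrieval gating as ATLAS frames it at a high level: retrieval fires when attention over the current context is too diffuse to ground the next prediction. The entropy criterion, the threshold value, and the name should_retrieve are assumptions, not the paper's token-layer scoring rule.

import torch

# Illustrative retrieval trigger: if the latest query token's attention
# (averaged over heads) is near-uniform in most layers, no cached token
# strongly grounds the prediction, so retrieval is invoked.
def should_retrieve(attn_weights, entropy_threshold=2.5):
    """attn_weights: [num_layers, seq_len] attention distribution of the
    latest query token per layer. Returns True to trigger retrieval."""
    probs = attn_weights.clamp_min(1e-9)
    entropy = -(probs * probs.log()).sum(dim=-1)  # per-layer entropy
    return bool((entropy > entropy_threshold).float().mean() > 0.5)

torch.manual_seed(0)
diffuse = torch.softmax(torch.zeros(12, 64), dim=-1)     # uniform attention
peaked = torch.softmax(torch.randn(12, 64) * 8, dim=-1)  # concentrated
print(should_retrieve(diffuse))  # True: context offers no anchor
print(should_retrieve(peaked))   # False: context grounds the prediction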
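Likewise, a minimal sketch of importance-based KV-cache compression in the spirit of CRITIC, assuming token importance can be approximated by the attention mass a cached entry has received; the function name compress_kv_cache and the budget interface are illustrative, not the paper's API.

import torch

# Evict low-importance cache entries to fit a fixed budget; importance is
# the accumulated attention each cached token received from recent queries.
def compress_kv_cache(keys, values, attn_weights, budget):
    """keys/values: [seq_len, d]; attn_weights: [num_queries, seq_len]."""
    importance = attn_weights.sum(dim=0)                       # [seq_len]
    keep = torch.topk(importance, k=min(budget, keys.size(0))).indices
    keep, _ = torch.sort(keep)                                 # keep positional order
    return keys[keep], values[keep]

torch.manual_seed(0)
k, v = torch.randn(16, 8), torch.randn(16, 8)
attn = torch.softmax(torch.randn(4, 16), dim=-1)  # 4 recent query rows
k2, v2 = compress_kv_cache(k, v, attn, budget=6)
print(k2.shape, v2.shape)  # torch.Size([6, 8]) torch.Size([6, 8])

Sorting the kept indices preserves the original token order, which matters because positional information in the cache is order-dependent.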
Submission Number: 8