Keywords: Retrieval-Augmented Generation, KV caching, Efficient inference, Parallel decoding, Context-aware decoding
Abstract: Retrieval-Augmented Generation faces a trade-off: concatenating retrieved documents into one long prompt enables multi-document reasoning but creates a prefill bottleneck, while encoding each document's KV cache separately is fast but breaks cross-document interaction. We propose Parallel Context-of-Experts Decoding (Pced), a training-free framework that shifts evidence aggregation from the attention mechanism to the decoding step. Pced treats retrieved documents as isolated "experts" and synchronizes their predictions via a novel retrieval-aware contrastive decoding rule that weighs expert logits against the model's prior. This approach recovers cross-document reasoning without constructing shared attention across documents.
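The decoding rule described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name `pced_step`, the weighted mixture pooling of expert distributions, and the contrast strength `alpha` are all assumptions introduced for exposition.

```python
import numpy as np

def _log_softmax(x, axis=-1):
    # Numerically stable log-softmax.
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def pced_step(expert_logits, prior_logits, alpha=1.0, weights=None):
    """One decoding step of a Pced-style contrastive rule (illustrative sketch).

    expert_logits: (K, V) next-token logits from K document "experts", each
                   computed against its own isolated KV cache.
    prior_logits:  (V,) next-token logits from the model with no document.
    alpha:         contrast strength (assumed hyperparameter); alpha=0
                   falls back to plain expert pooling.
    weights:       optional (K,) retrieval weights for the experts.
    """
    K, V = expert_logits.shape
    if weights is None:
        weights = np.full(K, 1.0 / K)
    # Pool expert predictions as a weighted mixture in probability space.
    expert_lp = _log_softmax(expert_logits, axis=-1)            # (K, V)
    pooled_lp = np.log((weights[:, None] * np.exp(expert_lp)).sum(axis=0))
    prior_lp = _log_softmax(prior_logits)                       # (V,)
    # Contrast: boost tokens the experts support more than the bare prior.
    scores = pooled_lp + alpha * (pooled_lp - prior_lp)
    return int(np.argmax(scores))
```

With greedy selection, tokens grounded in the retrieved evidence (high pooled expert probability, low prior probability) are promoted, which is the intended effect of weighing expert logits against the model prior.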
Paper Type: Short
Research Area: Retrieval-Augmented Language Models
Research Area Keywords: retrieval-augmented generation, LLM efficiency, inference methods, parallel decoding, open-domain QA
Contribution Types: Approaches low compute settings-efficiency, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 6295