Long Context Modeling with Ranked Memory-Augmented Retrieval

ACL ARR 2025 July Submission509 Authors

28 Jul 2025 (modified: 20 Aug 2025), ACL ARR 2025 July Submission, CC BY 4.0
Abstract: Effective long-term memory management is crucial for language models handling extended contexts. We introduce the Enhanced Ranked Memory-Augmented Retrieval (ERMAR) framework, which dynamically ranks memory entries by relevance. Unlike prior models, ERMAR employs a novel relevance scoring mechanism and a pointwise re-ranking model for key-value embeddings, inspired by learning-to-rank techniques in information retrieval. By integrating historical usage patterns with adaptive retrieval, ERMAR achieves state-of-the-art results on standard benchmarks, demonstrating superior scalability and performance on long-context tasks.
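To make the abstract's mechanism concrete, below is a minimal sketch of pointwise relevance scoring over key-value memory entries that folds in a historical-usage prior. This is not the authors' implementation; all names (MemoryEntry, relevance_score, rank_memory, usage_weight) and the specific scoring formula are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): pointwise re-ranking of
# key-value memory entries by embedding similarity plus a usage-based prior.
from dataclasses import dataclass
import numpy as np

@dataclass
class MemoryEntry:
    key: np.ndarray       # key embedding for the stored context chunk
    value: str            # stored text (or value payload) to retrieve
    usage_count: int = 0  # historical usage statistic (assumed form of "usage patterns")

def relevance_score(query: np.ndarray, entry: MemoryEntry, usage_weight: float = 0.1) -> float:
    """Pointwise score: cosine similarity plus a small bonus for frequently used entries."""
    cos = float(query @ entry.key /
                (np.linalg.norm(query) * np.linalg.norm(entry.key) + 1e-8))
    return cos + usage_weight * np.log1p(entry.usage_count)

def rank_memory(query: np.ndarray, memory: list[MemoryEntry], top_k: int = 5) -> list[MemoryEntry]:
    """Re-rank all memory entries by relevance and return the top-k for the context window."""
    ranked = sorted(memory, key=lambda e: relevance_score(query, e), reverse=True)
    for entry in ranked[:top_k]:
        entry.usage_count += 1  # retrieval feeds back into the usage prior (adaptive retrieval)
    return ranked[:top_k]
```

A learned pointwise re-ranker would replace the hand-set score with a model trained on (query, entry, relevance) examples, in the spirit of learning-to-rank; the sketch only shows where such a scorer plugs into memory retrieval.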
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: retrieval-augmented models, dense retrieval, re-ranking, memory augmented retrieval
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 509