Keywords: Memory, AI, LLM
Abstract: Memory mechanisms have become essential components in advanced AI architectures, significantly impacting performance, efficiency, and adaptability across diverse domains. This survey presents a unified theoretical framework for analyzing memory systems through three complementary lenses: retrieval mechanisms, memory structures, and update schemas. We systematically examine memory implementations across four key domains: Large Language Models (LLMs), Vision-Language Models (VLMs), Visual Prompt Tuning (VPT), and Video Understanding systems.
Our analysis reveals both universal memory patterns that transcend domains and domain-specific optimizations that address unique challenges in each field. We identify significant evolutionary trends, including the increasing prevalence of hybrid retrieval approaches, progression toward sophisticated hierarchical memory structures, and development of multi-factor update schemas that balance stability with adaptability. Through cross-domain comparisons, we identify transferable principles, highlight remaining challenges, and propose promising research directions for next-generation memory systems. These include theoretical frameworks for memory capacity optimization, cognitive-aligned architectures, cross-modal knowledge abstraction, and privacy-preserving memory systems. This comprehensive analysis provides valuable insights for researchers working on memory-enhanced AI systems across diverse application domains, offering both theoretical foundations and practical design considerations.
Submission Number: 6