ViSAGE: Constructing Self-Correcting Memories for Long-Form Video Understanding

ACL ARR 2026 January Submission 1841 Authors

31 Dec 2025 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Long-Form Video Understanding, Multimodal Agents, Agentic Memory, Self-Correcting Memory, Entity-Centric Reasoning
Abstract: Multimodal agents operating in long-horizon environments must build and continually update memory to support entity-consistent, temporally grounded reasoning. However, existing agentic memory approaches often discard fine-grained identity cues under aggressive compression and segment-wise processing. They also over-trust vector-similarity retrieval, surfacing semantically related yet identity-mismatched evidence and thereby causing identity errors and hallucinations. We propose ViSAGE, a multimodal agentic memory framework that constructs self-correcting, entity-centric memories. Specifically, ViSAGE anchors entity identity via cross-modal binding over long temporal ranges. It then applies bidirectional memory refinement to propagate delayed identity evidence, retroactively unifying historical records and stabilizing future reasoning. We also introduce multi-agent cross-verification to adjudicate retrieved evidence under an identity--evidence alignment constraint, enabling verified refusals instead of hallucinations when evidence is missing. Extensive results demonstrate that ViSAGE consistently outperforms the strongest baseline, achieving 5.9\% higher accuracy.
Paper Type: Long
Research Area: Multimodality and Language Grounding to Vision, Robotics and Beyond
Research Area Keywords: Vision and Language, Video Understanding, Language Grounding to Vision, Embodied AI, Agents, Question Answering, Factuality and Hallucination
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 1841