Editing the Moving World: Model Editing for Video LLMs

ACL ARR 2025 May Submission 4842 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Model editing, also known as knowledge editing, is receiving increasing attention in the field of Large Language Models (LLMs). However, existing model editing approaches focus predominantly on knowledge-level or static visual domains and overlook dynamic semantics. This paper explores the application of four representative model editing methods (FT, IKE, MEND, and SERAC) to Video Large Language Models (Vid-LLMs) and introduces the first benchmark specifically designed for Vid-LLM editing, $\textbf{VMEB}$ ($\textbf{V}$id-LLMs $\textbf{M}$odel $\textbf{E}$diting $\textbf{B}$enchmark), systematically extending model editing research from static modalities to dynamic video scenarios. For the video setting, our evaluation covers the traditional metrics of Reliability, Locality, and Generality, and additionally introduces a video-specific metric: Robustness. Based on the experimental results, we analyze the strengths and limitations of existing model editing approaches, among which MEND performs best, and identify new challenges and research directions for the future development of model editing.
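To make the four evaluation dimensions concrete, the sketch below shows one plausible way such metrics could be scored for a single edit case. It is illustrative only and not taken from the paper: the `EditCase` fields, the exact-match scoring, and the use of a pre-edit model for locality are all assumptions about how a VMEB-style evaluation might be organized.

```python
# Illustrative sketch (hypothetical, not the authors' code): scoring one edit case
# on Reliability, Generality, Locality, and Robustness with exact-match accuracy.
from dataclasses import dataclass, field
from typing import Callable, List, Dict

@dataclass
class EditCase:
    edit_query: str                                   # prompt about the edited video fact
    edit_target: str                                  # desired post-edit answer
    rephrased_queries: List[str] = field(default_factory=list)  # Generality probes
    locality_queries: List[str] = field(default_factory=list)   # unrelated prompts
    perturbed_queries: List[str] = field(default_factory=list)  # Robustness probes
                                                      # (e.g., frame drop, noise)

def accuracy(answers: List[str], target: str) -> float:
    """Fraction of answers that exactly match the target (assumed metric)."""
    return sum(a.strip() == target.strip() for a in answers) / max(len(answers), 1)

def evaluate_case(post_edit: Callable[[str], str],
                  pre_edit: Callable[[str], str],
                  case: EditCase) -> Dict[str, float]:
    # Reliability: the edited model answers the edited query correctly.
    reliability = accuracy([post_edit(case.edit_query)], case.edit_target)
    # Generality: the edit transfers to rephrasings of the edited query.
    generality = accuracy([post_edit(q) for q in case.rephrased_queries],
                          case.edit_target)
    # Locality: unrelated queries are answered the same as before the edit.
    loc_pairs = [(post_edit(q).strip(), pre_edit(q).strip())
                 for q in case.locality_queries]
    locality = sum(a == b for a, b in loc_pairs) / max(len(loc_pairs), 1)
    # Robustness: the edit survives perturbed versions of the video/query.
    robustness = accuracy([post_edit(q) for q in case.perturbed_queries],
                          case.edit_target)
    return {"reliability": reliability, "generality": generality,
            "locality": locality, "robustness": robustness}
```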
Paper Type: Long
Research Area: Multimodality and Language Grounding to Vision, Robotics and Beyond
Research Area Keywords: Model Editing, Knowledge Editing, Video-LLMs
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources, Data analysis
Languages Studied: English
Submission Number: 4842