ECA: Efficient Continual Alignment for Open-Ended Image-to-Text Generation

19 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Incremental Learning, Vision-Language Model, Image-to-Text Generation
Abstract: Incremental Learning (IL) for Open-ended Image-to-Text Generation (OpenITG) enables models to continuously generate accurate, contextually relevant text for new images while preserving previously acquired knowledge. Unlike prior studies, this paper addresses a more practical scenario in which the predominant category of visual data shifts over time as environments evolve. In this context, we introduce a new notion of continual alignment, which incrementally adapts the alignment module within pre-trained VLMs to preserve high-quality cross-modal representations. Based on this idea, we propose **E**fficient **C**ontinual **A**lignment (ECA), a novel exemplar-free IL approach for OpenITG. The key challenge is enabling the model to acquire new, task-specific features while minimizing interference with the established alignment, without accessing raw data from previous tasks. To address this, ECA employs three core mechanisms: a **M**ixture **o**f **Q**uery (MoQ) module that adapts task-specific query tokens, a **F**ish**e**r **D**ynamic **Ex**pansion (FeDEx) mechanism that dynamically expands the model structure based on a Fisher Information Matrix (FIM)-based metric, and an embedding dictionary with **D**ictionary **R**eplay (DR) to retain past knowledge. To evaluate ECA's performance, we construct four new IL OpenITG benchmarks that better reflect real-world scenarios. Experimental results demonstrate that ECA significantly mitigates catastrophic forgetting and improves IL performance compared to baseline methods. Benchmarks are available at <https://anonymous.4open.science/r/ECA-ToS-Benchmarks-FB17>.
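To make the FIM-based expansion idea concrete, the sketch below shows one *plausible* way such a metric could be computed; the abstract does not specify FeDEx's actual criterion, so every function name, the diagonal-Fisher approximation, and the overlap threshold `tau` here are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch of a Fisher-Information-based expansion test.
# Assumption: a diagonal FIM approximated by squared gradients, and an
# expansion decision based on overlap between old- and new-task Fisher mass.
import torch


def diagonal_fisher(model, loss_fn, batch):
    """Approximate the diagonal of the Fisher Information Matrix
    by the squared gradients of the loss on one batch."""
    model.zero_grad()
    loss = loss_fn(model, batch)
    loss.backward()
    return {name: p.grad.detach() ** 2
            for name, p in model.named_parameters() if p.grad is not None}


def should_expand(fisher_old, fisher_new, tau=0.5):
    """Expand the structure when the new task's Fisher mass concentrates
    on parameters already important for old tasks (high interference)."""
    overlap, norm = 0.0, 0.0
    for name, f_new in fisher_new.items():
        f_old = fisher_old.get(name)
        if f_old is None:
            continue
        overlap += (f_new * f_old).sum().item()
        norm += (f_new * f_new).sum().item() ** 0.5 \
              * (f_old * f_old).sum().item() ** 0.5
    score = overlap / max(norm, 1e-12)  # cosine-style overlap in [0, 1]
    return score > tau
```

Under this reading, a high overlap score means the new task would overwrite parameters the old alignment depends on, so adding capacity is preferable to overwriting; a low score means the existing structure can absorb the task.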
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 15317