Autoregressive Video Generation with Learnable Memory and Consistent Decoding

15 Sept 2025 (modified: 20 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: long video generation, long-term memory, exposure bias
TL;DR: We propose MemoryPack (efficient long/short-term context retrieval) and Direct Forcing (single-step train–inference alignment) to improve temporal consistency and reduce error accumulation in long-form video generation.
Abstract: Long-form video generation presents a dual challenge: models must capture long-range dependencies while preventing the error accumulation inherent in autoregressive decoding. To address these challenges, we make two contributions. First, for dynamic context modeling, we propose MemoryPack, a learnable context-retrieval mechanism that leverages both textual and image information as global guidance to jointly model short- and long-term dependencies, achieving minute-level temporal consistency. This design scales gracefully with video length while maintaining linear computational complexity. Second, to mitigate error accumulation, we introduce Direct Forcing, an efficient single-step approximation strategy that improves training–inference alignment and thereby curtails error propagation during inference. Together, MemoryPack and Direct Forcing substantially enhance the context consistency and reliability of long-form video generation, advancing the practical usability of autoregressive video models. Project website: https://anonymous.4open.science/w/ICLR2026-55FF
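The abstract describes the two mechanisms only at a high level, so the sketch below is one plausible reading in PyTorch, not the authors' implementation: a fixed set of learnable memory slots that attend over the frame history under text/image guidance (keeping per-step cost linear in video length), and a training step that conditions on the model's own detached single-step prediction instead of the ground truth to align training with inference. All class names, tensor shapes, and the toy predictor are illustrative assumptions.

```python
# Hedged sketch (not the authors' code): one possible reading of the abstract's
# two ideas. Names, shapes, and the toy predictor are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MemoryPackSketch(nn.Module):
    """Learnable context retrieval: a fixed number of memory slots attend over
    the frame history and are refined by text/image guidance, so the per-step
    cost grows linearly with video length."""

    def __init__(self, dim: int = 256, num_slots: int = 32, num_heads: int = 4):
        super().__init__()
        self.slots = nn.Parameter(torch.randn(num_slots, dim) * 0.02)
        self.pack_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.read_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, history: torch.Tensor, guidance: torch.Tensor) -> torch.Tensor:
        # history: (B, T, dim) past-frame tokens; guidance: (B, G, dim) text/image tokens.
        slots = self.slots.unsqueeze(0).expand(history.size(0), -1, -1)
        packed, _ = self.pack_attn(slots, history, history)      # pack the long history
        refined, _ = self.read_attn(packed, guidance, guidance)  # inject global guidance
        return self.norm(packed + refined)                       # (B, num_slots, dim)


class NextTokenPredictor(nn.Module):
    """Toy autoregressive head: predicts the next frame token from the packed memory."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.memory = MemoryPackSketch(dim)
        self.head = nn.Linear(dim, dim)

    def forward(self, history: torch.Tensor, guidance: torch.Tensor) -> torch.Tensor:
        context = self.memory(history, guidance)
        return self.head(context.mean(dim=1))                    # (B, dim) next token


def direct_forcing_step(model: NextTokenPredictor,
                        clip: torch.Tensor,
                        guidance: torch.Tensor,
                        optimizer: torch.optim.Optimizer) -> float:
    """One 'Direct Forcing'-style training step: the last conditioning token is
    replaced by the model's own detached single-step prediction, so that
    training-time conditioning matches what the model sees at inference."""
    history, target = clip[:, :-1], clip[:, -1]

    with torch.no_grad():  # single-step approximation of the model's own output
        approx_last = model(history[:, :-1], guidance)
    mixed_history = torch.cat([history[:, :-1], approx_last.unsqueeze(1)], dim=1)

    pred = model(mixed_history, guidance)
    loss = F.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = NextTokenPredictor()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    clip = torch.randn(2, 40, 256)   # 40 frame tokens per clip (illustrative)
    guide = torch.randn(2, 8, 256)   # text/image guidance tokens (illustrative)
    print(direct_forcing_step(model, clip, guide, opt))
```

The fixed slot count is what keeps retrieval linear in video length under these assumptions: each decoding step attends from a constant number of queries over the history once, rather than growing the attention context quadratically.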
Primary Area: generative models
Submission Number: 5431