TTOM: Test-Time Optimization and Memorization for Compositional Video Generation

ICLR 2026 Conference Submission 3050 Authors

08 Sept 2025 (modified: 29 Nov 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Text-to-Video Generation, Test-Time Optimization, Memory
TL;DR: We propose a latent memory–based test-time optimization framework that improves compositional text-to-video generation by dynamically adapting to unseen prompts during inference.
Abstract: Video Foundation Models (VFMs) exhibit remarkable visual generation performance, but struggle in compositional scenarios (e.g., motion, numeracy, and spatial relations). In this work, we introduce **Test-Time Optimization and Memorization (TTOM)**, a training-free framework that aligns VFM outputs with spatiotemporal layouts during inference for better text-image alignment. Rather than directly intervening on latents or attention per sample, as in existing work, we integrate and optimize new parameters guided by a general layout-attention objective. Furthermore, we formulate video generation within a streaming setting, and maintain historical optimization contexts with a parametric memory mechanism that supports flexible operations, such as insert, read, update, and delete. Notably, we find that TTOM disentangles compositional world knowledge, showing powerful transferability and generalization. Experimental results on the T2V-CompBench and VBench benchmarks establish TTOM as an effective, practical, scalable, and efficient framework for achieving cross-modal alignment in compositional video generation on the fly.
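To make the memory mechanism concrete, below is a minimal, hypothetical sketch of a parametric memory that stores per-prompt optimization contexts and exposes the insert/read/update/delete operations mentioned in the abstract. The class name, keying scheme, and momentum-based update rule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: a parametric memory of test-time optimization contexts.
# Keys (e.g., prompt or layout identifiers) and the blending rule are assumptions.
from dataclasses import dataclass, field
from typing import Dict, Optional
import numpy as np


@dataclass
class ParametricMemory:
    # Each slot holds parameters optimized at test time for one prompt/layout.
    slots: Dict[str, np.ndarray] = field(default_factory=dict)

    def insert(self, key: str, params: np.ndarray) -> None:
        """Store newly optimized parameters for an unseen prompt."""
        self.slots[key] = params.copy()

    def read(self, key: str) -> Optional[np.ndarray]:
        """Retrieve stored parameters to warm-start optimization for a similar prompt."""
        return self.slots.get(key)

    def update(self, key: str, params: np.ndarray, momentum: float = 0.9) -> None:
        """Blend new test-time parameters into an existing slot (streaming setting)."""
        if key in self.slots:
            self.slots[key] = momentum * self.slots[key] + (1.0 - momentum) * params
        else:
            self.insert(key, params)

    def delete(self, key: str) -> None:
        """Evict a stale or unhelpful optimization context."""
        self.slots.pop(key, None)
```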
Supplementary Material: zip
Primary Area: generative models
Submission Number: 3050