Submission Type: Full Papers (up to 8 pages)
Supplementary Material: pptx
Keywords: physically plausible video generation, representation alignment, self-similarity, spatio-temporal correspondence
TL;DR: We introduce a Tempered Self-Similarity Alignment (TSA) loss that distills probabilistic temporal correspondences from visual foundation models to guide video diffusion models toward more realistic motion dynamics.
Abstract: Despite remarkable advances, video generative models still struggle to generate physically realistic videos, frequently exhibiting appearance drift, implausible motion, and temporal inconsistencies. In this work, we address this limitation by transferring relational knowledge encoded in spatio-temporal self-similarity (STSS) from visual foundation models into video generative models. STSS represents pairwise similarities among features across space and time, revealing the relational structure of how objects interact with other entities throughout a video and effectively capturing real-world dynamics, including object motion and semantic transformations. To transfer this relational knowledge, we propose the Tempered Self-Similarity Alignment (TSA) loss, which transforms STSS into probabilistic correspondence distributions and trains the video generative model to align its correspondence distributions with those of the visual foundation model on dynamically changing regions. Evaluated on the VideoPhy and VideoPhy2 benchmarks, our method demonstrates substantial improvements in physical plausibility across diverse interaction scenarios, validating the effectiveness of transferring relational knowledge for physically realistic video generation.
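To make the abstract's recipe concrete, here is a minimal sketch of a tempered self-similarity alignment loss. It assumes per-frame patch features of shape (T, N, D) from both the video generative model and a frozen visual foundation model; the temperature `tau`, the `dynamic_mask` argument, and the KL-divergence objective are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def stss(feats: torch.Tensor) -> torch.Tensor:
    """Spatio-temporal self-similarity: cosine similarity between every pair
    of patch features across all frames. feats: (T, N, D) -> (T*N, T*N)."""
    flat = F.normalize(feats.reshape(-1, feats.shape[-1]), dim=-1)
    return flat @ flat.T


def tsa_loss(gen_feats: torch.Tensor,
             vfm_feats: torch.Tensor,
             tau: float = 0.1,
             dynamic_mask: torch.Tensor | None = None) -> torch.Tensor:
    """Align the generator's tempered correspondence distributions with those
    of the foundation model.

    gen_feats, vfm_feats: (T, N, D) features for T frames of N patches each.
    dynamic_mask: optional (T*N,) boolean mask selecting dynamically changing
    regions; rows outside the mask are ignored.
    """
    # A temperature-scaled softmax turns each row of the STSS matrix into a
    # probabilistic correspondence distribution over all space-time positions.
    log_p_gen = F.log_softmax(stss(gen_feats) / tau, dim=-1)
    with torch.no_grad():  # the frozen foundation model provides the target
        p_vfm = F.softmax(stss(vfm_feats) / tau, dim=-1)

    # Row-wise KL(target || prediction), averaged over (masked) positions.
    kl = F.kl_div(log_p_gen, p_vfm, reduction="none").sum(dim=-1)
    if dynamic_mask is not None:
        kl = kl[dynamic_mask]
    return kl.mean()
```

Restricting the loss to masked, dynamically changing regions (as the abstract describes) keeps the alignment signal focused on motion rather than static background, and a lower `tau` sharpens the correspondence distributions so the target emphasizes the strongest matches.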
Submission Number: 10