Temporal prediction of natural videos is inherently uncertain, but explicit probabilistic modeling and inference suffer from statistical and computational challenges in high dimensions. We describe an implicit regression-based framework for estimating and sampling the conditional density of the next frame in a video given previously observed frames. We show that sequence-to-image deep networks trained on a simple resilience-to-noise objective function extract adaptive representations for temporal prediction. Synthetic experiments demonstrate that this score-based framework can handle occlusion boundaries: unlike classical methods that average over bifurcating temporal trajectories, it selects among the likely trajectories, choosing more probable options with higher frequency. Furthermore, analysis of networks trained on natural videos reveals that the learned representations exploit spatio-temporal continuity and automatically weight predictive evidence by its reliability.
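As a rough illustration of the approach summarized above, the sketch below shows one way such a framework could look in PyTorch (the abstract does not specify a framework or architecture): a hypothetical sequence-to-image denoiser `NextFrameDenoiser`, a resilience-to-noise (denoising regression) training objective, and a score-based sampler that uses the denoiser residual as an estimate of the score of the noise-smoothed conditional density. All names, layer choices, noise schedules, and step sizes here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


# Hypothetical sequence-to-image denoiser: takes K past frames plus a noisy
# candidate next frame, and regresses an estimate of the clean next frame.
class NextFrameDenoiser(nn.Module):
    def __init__(self, num_past=3, channels=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d((num_past + 1) * channels, hidden, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, past_frames, noisy_next):
        # past_frames: (B, K, C, H, W); noisy_next: (B, C, H, W)
        B, K, C, H, W = past_frames.shape
        x = torch.cat([past_frames.reshape(B, K * C, H, W), noisy_next], dim=1)
        return self.net(x)


def denoising_loss(model, past_frames, next_frame, sigma_max=1.0):
    # "Resilience-to-noise" objective (assumed form): corrupt the target frame
    # with Gaussian noise of random amplitude, then regress back to the clean
    # frame conditioned on the past frames.
    sigma = sigma_max * torch.rand(next_frame.shape[0], 1, 1, 1)
    noisy = next_frame + sigma * torch.randn_like(next_frame)
    return ((model(past_frames, noisy) - next_frame) ** 2).mean()


@torch.no_grad()
def sample_next_frame(model, past_frames, shape, steps=100,
                      sigma_start=1.0, sigma_end=0.01, step_size=0.2):
    # Score-based sampling sketch: by the Miyasawa/Tweedie relation, the
    # denoiser residual f(y) - y is proportional to the score of the
    # noise-smoothed conditional density, so partial denoising steps with a
    # shrinking amount of injected noise perform a coarse-to-fine stochastic
    # ascent toward a sample of the next-frame distribution.
    y = sigma_start * torch.randn(shape)
    for sigma in torch.linspace(sigma_start, sigma_end, steps):
        residual = model(past_frames, y) - y          # ~ sigma^2 * score
        y = y + step_size * residual                  # move toward higher density
        y = y + sigma * (2 * step_size) ** 0.5 * torch.randn_like(y)
    return model(past_frames, y)                      # final full denoise
```

The point this sketch tries to capture is the implicit nature of the framework: a single regression network trained only to remove noise defines the conditional score, so sampling diverse next frames requires no explicit density model over pixels.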