Learning Cross-Video Neural Representations for High-Quality Frame Interpolation

ECCV 2022
Abstract: This paper considers the problem of temporal video interpolation, where the goal is to synthesize a new video frame given its two neighbors. We propose Cross-Video Neural Representation (CURE) as the first video interpolation method based on neural fields (NF). NF refers to the recent class of methods for neural representation of complex 3D scenes that has seen widespread success across computer vision. CURE represents the video as a continuous function parameterized by a coordinate-based neural network, whose inputs are the spatiotemporal coordinates and outputs are the corresponding RGB values. CURE introduces a new architecture that conditions the neural network on the input frames to impose space-time consistency in the synthesized video. This not only improves the final interpolation quality, but also enables CURE to learn a prior across multiple videos. Experimental evaluations show that CURE achieves state-of-the-art performance on video interpolation on several benchmark datasets. (This work was supported by CCF-2043134.)
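The abstract describes a coordinate-based network that maps spatiotemporal coordinates to RGB values while being conditioned on the two neighboring frames. The sketch below illustrates that general idea in PyTorch; it is not the authors' implementation, and the frame encoder design, feature dimensions, layer sizes, and coordinate normalization are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' code) of a frame-conditioned neural field:
# (x, y, t) coordinates + features from the two neighboring frames -> RGB.
# Encoder architecture, dimensions, and coordinate ranges are assumptions.
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Hypothetical CNN mapping the two input frames to a conditioning vector."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, frame0, frame1):
        x = torch.cat([frame0, frame1], dim=1)        # (B, 6, H, W)
        return self.proj(self.net(x).flatten(1))      # (B, feat_dim)

class ConditionedNeuralField(nn.Module):
    """Coordinate-based MLP: spatiotemporal coordinates + frame features -> RGB."""
    def __init__(self, feat_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),        # RGB in [0, 1]
        )

    def forward(self, coords, cond):
        # coords: (B, N, 3) spatiotemporal coordinates; cond: (B, feat_dim)
        cond = cond.unsqueeze(1).expand(-1, coords.shape[1], -1)
        return self.mlp(torch.cat([coords, cond], dim=-1))  # (B, N, 3)

# Usage: encode the neighboring frames, then query the field at the target time.
encoder, field = FrameEncoder(), ConditionedNeuralField()
f0, f1 = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
cond = encoder(f0, f1)
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 64),
                        torch.linspace(-1, 1, 64), indexing="ij")
coords = torch.stack([xs, ys, torch.full_like(xs, 0.5)], dim=-1).reshape(1, -1, 3)
rgb = field(coords, cond).reshape(1, 64, 64, 3)        # interpolated frame at t = 0.5
```

Because the field is queried at continuous (x, y, t), the intermediate timestamp is not restricted to t = 0.5; in principle any time between the two input frames can be sampled from the same conditioned representation.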
