Abstract: Space-time video super-resolution aims to simultaneously increase the spatial and temporal resolution of low-resolution, low frame-rate videos. Existing deep learning-based methods have made notable strides, but they predominantly achieve space-time video super-resolution through a relatively simple integration of modules for the video super-resolution and video frame interpolation sub-tasks, and thus do not fully exploit the inherent relationships between the two. To address this limitation, we propose a Complementary Dual-Branch Network designed to better explore the interdependence of the two sub-tasks. Specifically, our dual-branch architecture facilitates mutual enhancement between the video super-resolution and video frame interpolation sub-tasks within each branch, and provides mutual guidance between the two branches. Additionally, we introduce a simple yet effective strategy for rough optical-flow estimation, incorporating Flow-Guided Deformable Alignment into space-time video super-resolution to achieve precise motion estimation. Furthermore, we employ an RNN-based Backward and Forward Recurrent module so that every frame can exploit information from the whole sequence; this module is more efficient and memory-saving than the currently popular bidirectional LSTM module. Experimental results on several datasets show that our method achieves superior accuracy with fewer parameters compared to state-of-the-art methods.
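The backward-and-forward recurrent propagation mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the recurrent cell and the fusion rule here are simplified stand-ins for the learned modules, and frames are plain arrays rather than feature maps.

```python
import numpy as np

def cell(x, h):
    # Hypothetical recurrent cell: a fixed blend of the current frame and the
    # propagated hidden state (a stand-in for the paper's learned recurrent unit).
    return 0.5 * x + 0.5 * h

def backward_forward_propagate(frames):
    """Two-pass recurrent propagation: a backward pass followed by a forward
    pass, so every output aggregates information from the whole sequence,
    unlike a single-direction RNN."""
    T = len(frames)
    h = np.zeros_like(frames[0])
    backward = [None] * T
    for t in range(T - 1, -1, -1):          # backward pass over the sequence
        h = cell(frames[t], h)
        backward[t] = h
    h = np.zeros_like(frames[0])
    outputs = []
    for t in range(T):                      # forward pass, fused with backward states
        h = cell(frames[t], h)
        outputs.append(0.5 * (h + backward[t]))
    return outputs

frames = [np.full((2, 2), float(t)) for t in range(4)]
outs = backward_forward_propagate(frames)
```

Because the hidden state is carried in a single pass per direction, only one state tensor per direction is alive at a time, which is the source of the memory advantage over holding full bidirectional LSTM states.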