JFFRA: Joint Flow and Feature Refinement Using Attention for Video Restoration

Ranjith Merugu, Mohammad Sameer Suhail, Akshay P. Sarashetti, Venkata Bharath Reddy Reddem, Pankaj Kumar Bajpai, Amit Satish Unde

Published: 2025, Last Modified: 30 Mar 2026 · ICCVW 2025 · CC BY-SA 4.0
Abstract: Recent advancements in video restoration have focused on recovering high-quality video frames from low-quality inputs. Compared with static images, the performance of video restoration significantly depends on the efficient exploitation of temporal correlations among successive video frames. Numerous techniques make use of temporal information via flow-based strategies or recurrent architectures. However, these methods often encounter difficulties in preserving temporal consistency as they utilize degraded input video frames. To resolve this issue, we propose a novel video restoration framework named Joint Flow and Feature Refinement using Attention (JFFRA). The proposed JFFRA is based on the key philosophy of iteratively enhancing data through the synergistic collaboration of flow (alignment) and restoration. By leveraging previously enhanced features to refine flow and vice versa, JFFRA enables efficient feature enhancement using temporal information. This interplay between flow and restoration is executed at multiple scales, reducing the dependence on precise flow estimation. Moreover, we incorporate an occlusion-aware temporal loss function to enhance the network's capability in eliminating flickering artifacts. Comprehensive experiments demonstrate the effectiveness of JFFRA across various restoration tasks such as denoising, deblurring, and super-resolution. Our method achieves notable improvements in both quantitative metrics and perceptual quality over state-of-the-art approaches.
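The abstract mentions an occlusion-aware temporal loss for suppressing flickering artifacts but does not give its formulation. As an illustrative sketch only (not the paper's actual loss), one common form of such a loss warps the previous frame toward the current one with the estimated flow, masks out pixels judged occluded, and penalizes the remaining temporal difference. The warp here uses nearest-neighbor sampling and a crude photometric occlusion check; both are placeholder choices.

```python
import numpy as np

def warp(frame, flow):
    """Backward-warp a grayscale frame (H, W) by a dense flow field
    (H, W, 2) using nearest-neighbor sampling (bilinear in practice)."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return frame[src_y, src_x]

def occlusion_aware_temporal_loss(curr, prev, flow, occ_thresh=0.5):
    """Penalize temporal inconsistency between the current frame and the
    flow-warped previous frame, ignoring pixels flagged as occluded.
    The occlusion mask is a simple photometric check; the paper's actual
    criterion is not specified in the abstract."""
    warped_prev = warp(prev, flow)
    diff = np.abs(curr - warped_prev)
    visible = diff < occ_thresh  # pixels with large residual are treated as occluded
    if not visible.any():
        return 0.0
    return float(diff[visible].mean())
```

For a static scene with zero flow, the warped previous frame matches the current frame exactly and the loss is zero; a pixel whose residual exceeds the threshold is excluded from the average rather than penalized, which is the occlusion-aware behavior the abstract alludes to.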