STD-Net: Spatio-Temporal Decomposition Network for Video Demoiréing With Sparse Transformers
Abstract: Video demoiréing is a new challenge in video restoration. Unlike image demoiréing, which removes static, uniform patterns from a single frame, video demoiréing must tackle dynamic and varied moiré patterns while preserving video details, colors, and temporal consistency. Modeling moiré patterns is particularly challenging for videos with camera or object motion, where separating moiré from the underlying video content across frames is extremely difficult. Nonetheless, we observe that the spatial distribution of moiré patterns on each frame is often sparse, and that their long-range temporal correlation is weak. To fully exploit this observation, we propose a sparsity-constrained spatial self-attention scheme that concentrates on efficiently removing the sparse moiré in each frame without being distracted by dynamic video content. The frame-wise spatial features are then correlated and aggregated by a local temporal cross-frame attention module to produce temporally consistent, high-quality moiré-free videos. These decoupled spatial and temporal transformers constitute the Spatio-Temporal Decomposition Network, dubbed STD-Net. For evaluation, we present a large-scale video demoiréing benchmark covering various real-life scenes, camera motions, and object motions. We demonstrate that the proposed model achieves superior performance, both effectively and efficiently, on video demoiréing and single-image demoiréing tasks. The dataset is released at https://github.com/FZU-N/LVDM.
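To make the decomposition concrete, below is a minimal PyTorch sketch of the two-stage design the abstract outlines: sparsity-constrained self-attention applied independently per frame, followed by cross-frame attention restricted to a local temporal window. The class names, the top-k sparsification, and the window size are illustrative assumptions for this sketch, not the authors' actual implementation.

```python
# Hypothetical sketch of the spatio-temporal decomposition described above.
# Not the authors' code; names, top-k sparsity, and window size are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseSpatialAttention(nn.Module):
    """Per-frame self-attention that keeps only the top-k scores per query,
    one common way to impose a sparsity constraint on attention maps."""

    def __init__(self, dim, k=16):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        self.k = k  # must not exceed the number of spatial tokens N

    def forward(self, x):  # x: (B*T, N, C) tokens of individual frames
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5   # (B*T, N, N)
        # Sparsify: mask out all but the k largest scores for each query.
        kth = attn.topk(self.k, dim=-1).values[..., -1:]      # k-th largest
        attn = attn.masked_fill(attn < kth, float("-inf"))
        return self.proj(F.softmax(attn, dim=-1) @ v)


class LocalTemporalAttention(nn.Module):
    """Cross-frame attention restricted to a small temporal window,
    reflecting the observation that long-range temporal correlation of
    moiré patterns is weak."""

    def __init__(self, dim, window=3):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, dim * 2)
        self.window = window

    def forward(self, x):  # x: (B, T, N, C) per-frame spatial features
        B, T, N, C = x.shape
        out = torch.empty_like(x)
        for t in range(T):
            lo = max(0, t - self.window // 2)
            hi = min(T, t + self.window // 2 + 1)
            q = self.q(x[:, t])                              # current frame
            k, v = self.kv(x[:, lo:hi].reshape(B, -1, C)).chunk(2, dim=-1)
            attn = F.softmax(q @ k.transpose(-2, -1) / C ** 0.5, dim=-1)
            out[:, t] = attn @ v
        return out


# Toy usage: 2 videos, 5 frames, an 8x8 token grid, 32-dim features.
frames = torch.randn(2, 5, 64, 32)
spatial = SparseSpatialAttention(32, k=16)
temporal = LocalTemporalAttention(32, window=3)
feats = spatial(frames.reshape(-1, 64, 32)).reshape(2, 5, 64, 32)
video = temporal(feats)  # (2, 5, 64, 32), temporally aggregated features
```

The point of the decomposition is that the expensive dense spatial attention is replaced by a sparse variant per frame, while temporal mixing stays cheap because each frame only attends to its immediate neighbors.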