Complex scene video frames alignment and multi-frame fusion deraining with deep neural network

Published: 01 Jan 2023, Last Modified: 08 Apr 2025 · Neural Comput. Appl. 2023 · CC BY-SA 4.0
Abstract: Rain is one of the most common weather conditions, and it is difficult to obtain clear and accurate background information when shooting outdoors in the rain. Such videos also often contain complex background depth and highly dynamic scenes. To address this problem, we propose a two-stage video deraining method based on adjacent-frame alignment and multi-frame fusion. For adjacent frames, especially in large, complex dynamic scenes, we combine optical flow and superpixel matching to achieve fine-grained alignment of scene content at both the pixel and semantic levels. Optical flow is used for global pre-alignment of the frames. Meanwhile, because the scene depth range of the image is large, we segment the target deraining image into smaller perceptual units, superpixels (SPs), which align the scene content of adjacent frames more accurately while preserving content details. The aligned adjacent frames then serve as input for fusion deraining. In the multi-frame fusion stage, a deep multi-frame fusion deraining neural network uses the temporal and spatial information of multiple frames to compensate for and restore the details of the target frame, outputting a clean image. Extensive experiments on a series of synthetic and real videos with rain streaks show that our method removes rain effectively, with visibly better rain elimination, and verify its superiority over previous state-of-the-art methods.
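The align-then-fuse pipeline described above can be illustrated with a heavily simplified sketch. This is not the paper's method: the dense optical flow and superpixel matching are replaced by a brute-force global translation estimate, and the deep fusion network by a per-pixel temporal median, which suppresses rain streaks because they are sparse outliers across aligned frames. All function names here are illustrative assumptions.

```python
import numpy as np

def estimate_shift(target, neighbor, max_shift=3):
    """Brute-force global translation estimate: a crude stand-in for the
    optical-flow pre-alignment step (the paper uses dense optical flow
    plus superpixel-level matching)."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(neighbor, dy, axis=0), dx, axis=1)
            err = np.mean((shifted - target) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def align(neighbor, shift):
    """Warp a neighbor frame onto the target by the estimated shift."""
    dy, dx = shift
    return np.roll(np.roll(neighbor, dy, axis=0), dx, axis=1)

def fuse_median(frames):
    """Temporal fusion stand-in: rain streaks rarely hit the same pixel
    in several aligned frames, so the per-pixel median recovers the
    background (the paper instead learns the fusion with a deep network)."""
    return np.median(np.stack(frames, axis=0), axis=0)
```

A quick usage pattern: estimate each neighbor's shift against the target frame, align all frames, then fuse. With three aligned frames, any pixel corrupted by rain in at most one frame is restored exactly by the median.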