Self-Aligned Video Deraining with Transmission-Depth Consistency
Abstract: In this paper, we address the problem of removing rain streaks and rain accumulation from video by developing a self-alignment network with transmission-depth consistency. Existing video-based deraining methods focus only on rain streak removal and commonly use optical flow to align the rain video frames. However, besides rain streaks, rain accumulation can considerably degrade visibility; moreover, optical flow estimation in a rain video is still erroneous, making the deraining results inaccurate. Our method employs deformable convolution layers in its encoder to achieve feature-level frame alignment,
and hence avoids using optical flow. For rain streaks, our
method predicts the current frame from its adjacent frames,
such that rain streaks that appear randomly in the temporal domain can be removed. For rain accumulation, our
method employs a transmission-depth consistency loss to
resolve the ambiguity between the depth and water-droplet
density. Our network estimates the depth from consecutive rain-accumulation-removal outputs, and calculates the
transmission map using a commonly used physics model.
To ensure photometric-temporal and depth-temporal consistencies, our method estimates the camera poses, so that
it can warp one frame to its adjacent frames. Experimental results show that our method is effective in removing both rain streaks and rain accumulation, outperforming state-of-the-art methods both quantitatively and qualitatively.
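The "commonly used physics model" mentioned above is the standard scattering model, in which transmission decays exponentially with scene depth, t(x) = exp(-beta * d(x)). Below is a minimal illustrative sketch of how a transmission map can be derived from a depth map under that model, and how a transmission-depth consistency penalty could be formed from it; the function names, the scattering coefficient beta, and the loss form (a simple L1 mean) are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def transmission_from_depth(depth, beta=1.0):
    """Transmission implied by depth under the standard scattering
    model: t(x) = exp(-beta * d(x)). beta is illustrative here."""
    return np.exp(-beta * depth)

def transmission_depth_consistency(t_pred, depth, beta=1.0):
    """Hypothetical L1 consistency penalty between a predicted
    transmission map and the one implied by the estimated depth;
    it is zero exactly when the two maps agree."""
    return float(np.mean(np.abs(t_pred - transmission_from_depth(depth, beta))))

# Toy 2x2 depth map: larger depth -> lower transmission.
depth = np.array([[0.0, 1.0],
                  [2.0, 3.0]])
t = transmission_from_depth(depth, beta=0.5)

# A transmission map consistent with the depth incurs zero loss.
loss = transmission_depth_consistency(t, depth, beta=0.5)
```

A loss of this shape gives the network a differentiable signal tying its transmission and depth estimates together, which is one way the ambiguity between depth and water-droplet density described in the abstract could be constrained.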