Keywords: Blur-to-Video, Motion deblurring, Encoder-Decoder
TL;DR: We present a novel unified architecture that restores video frames from a single motion-blurred image in an end-to-end manner.
Abstract: We propose a novel framework to generate clean video frames from a single motion-blurred image.
While a broad range of literature focuses on recovering a single sharp image from a blurred one, in this work we tackle a more challenging task: restoring a video from a single blurred image. We formulate video restoration from a single blurred image as an inverse problem, treating the clean image sequence and its underlying motion as latent factors and the blurred image as the observation. Our framework is based on an encoder-decoder structure with spatial transformer network modules that restores a video sequence and its underlying motion in an end-to-end manner. We design a loss function and regularizers with complementary properties to stabilize training, and we analyze variants of the proposed network. The effectiveness and transferability of our network are demonstrated through a large set of experiments on two different types of datasets: camera-rotation blurs generated from panorama scenes and dynamic motion blurs in high-speed videos. Our code and models will be publicly available.
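The inverse-problem formulation above rests on a standard blur-formation model: the observed blurred image is (approximately) the temporal average of the latent sharp frames over the exposure. Below is a minimal NumPy sketch of that forward model, not the authors' code; the function and variable names are illustrative assumptions.

```python
import numpy as np

def synthesize_blur(frames: np.ndarray) -> np.ndarray:
    """Average a stack of sharp frames (N, H, W) into one blurred image (H, W).

    This is the forward (observation) model; the paper's network inverts it,
    recovering the frame sequence and motion from the single blurred image.
    """
    return frames.mean(axis=0)

# Toy example: a bright dot translating one pixel per frame leaves a streak.
N, H, W = 5, 1, 9
frames = np.zeros((N, H, W))
for i in range(N):
    frames[i, 0, 2 + i] = 1.0  # the dot moves one pixel right each frame

blurred = synthesize_blur(frames)
# Each pixel the dot visited receives 1/N of its intensity, i.e. 0.2.
```

Inverting this map is ill-posed (many frame sequences average to the same blurred image), which is why the paper constrains the solution with motion estimated by spatial transformer modules and with complementary regularizers.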