Video Demoiréing with Relation-Based Temporal Consistency

18 Nov 2022 · OpenReview Archive Direct Upload
Abstract: Moiré patterns, appearing as color distortions, severely degrade image and video quality when a screen is filmed with a digital camera. Considering the increasing demand for capturing videos, we study how to remove such undesirable moiré patterns in videos, namely video demoiréing. To this end, we introduce the first hand-held video demoiréing dataset with a dedicated data collection pipeline to ensure spatial and temporal alignment of the captured data. Further, a baseline video demoiréing model with implicit feature-space alignment and selective feature aggregation is developed to leverage complementary information from nearby frames and improve frame-level video demoiréing. More importantly, we propose a relation-based temporal consistency loss that encourages the model to learn temporal consistency priors directly from ground-truth reference videos, which facilitates producing temporally consistent predictions while effectively maintaining frame-level quality. Extensive experiments demonstrate the superiority of our model. Code is available at https://daipengwa.github.io/VDmoire_ProjectPage/.
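
The abstract describes the relation-based temporal consistency loss only at a high level. As a rough, hypothetical sketch (not necessarily the paper's exact formulation), one way to realize such a loss is to match the temporal changes between consecutive predicted frames to the corresponding changes in the ground-truth video, rather than penalizing raw frame-to-frame differences of the prediction alone; the function and tensor shapes below are illustrative assumptions.

# Hypothetical sketch of a relation-based temporal consistency loss.
# The "relation" here is the difference between consecutive frames; the loss
# encourages predicted temporal changes to mimic ground-truth temporal changes.
import torch
import torch.nn.functional as F

def relation_temporal_consistency_loss(pred, gt):
    """pred, gt: tensors of shape (B, T, C, H, W) over a short frame window."""
    # Temporal relations: differences between consecutive frames.
    pred_rel = pred[:, 1:] - pred[:, :-1]   # (B, T-1, C, H, W)
    gt_rel = gt[:, 1:] - gt[:, :-1]
    # Match predicted temporal changes to ground-truth temporal changes.
    return F.l1_loss(pred_rel, gt_rel)

# Example usage with random tensors standing in for a 3-frame window.
if __name__ == "__main__":
    pred = torch.rand(2, 3, 3, 64, 64)
    gt = torch.rand(2, 3, 3, 64, 64)
    print(relation_temporal_consistency_loss(pred, gt).item())

In practice this term would be combined with a frame-level reconstruction loss, so that per-frame quality and temporal stability are optimized jointly.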