R2SFD: Improving Single Image Reflection Removal using Semantic Feature Dictionary

Published: 20 Jul 2024 · Last Modified: 06 Aug 2024 · MM 2024 Poster · CC BY 4.0
Abstract: Single image reflection removal is a severely ill-posed problem: separating the desirable transmission layer from the undesirable reflection layer is very hard. Most existing single image reflection removal methods try to recover the transmission layer by exploiting cues extracted only from the given input image. However, there is abundant unutilized information in the form of millions of reflection-free images available publicly. Even though this information is easily available, exploiting it to effectively remove reflections is non-trivial. In this paper, we propose a novel method, termed R2SFD, for improving single image reflection removal using a Semantic Feature Dictionary (SFD) constructed from a database of reflection-free images. The SFD is constructed using a novel Reflection Aware Feature Extractor (RAFENet) that extracts features invariant to the presence of reflections. The SFD and the input image are then passed to another novel network termed SFDNet. This network first extracts RAFENet features from the reflection-corrupted input image, searches for similar features in the SFD, and transfers the semantic content to generate the final output. To further improve reflection removal, we also introduce a Large Scale Reflection Removal (LSRR) dataset consisting of 2650 image pairs covering a variety of real-world reflection scenarios. The proposed method achieves superior results both qualitatively and quantitatively compared to state-of-the-art single image reflection removal methods on public real-world datasets as well as our LSRR dataset. We will release the dataset at https://github.com/ee19d005/r2sfd.
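The dictionary lookup described in the abstract (extract features from the corrupted input, find the most similar features in the SFD, then transfer their semantic content) can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's actual RAFENet/SFDNet architecture: we stand in for learned feature maps with plain NumPy arrays and use cosine similarity for the nearest-neighbor search, which the paper may implement differently.

```python
import numpy as np

def build_feature_dictionary(feature_maps):
    """Flatten per-image feature maps of shape (C, H, W) into a bank of
    C-dimensional vectors, one per spatial location (stand-in for the SFD)."""
    vectors = [f.reshape(f.shape[0], -1).T for f in feature_maps]  # each (H*W, C)
    return np.concatenate(vectors, axis=0)                         # (N, C)

def match_features(query, dictionary):
    """For each query vector, return the index of the most similar
    dictionary vector under cosine similarity (the 'search' step)."""
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    sim = q @ d.T                      # (num_queries, num_dict) similarity matrix
    return sim.argmax(axis=1)
```

In the actual method, the matched dictionary entries would then guide the reconstruction of the transmission layer; here the sketch stops at retrieval, which is the part the abstract specifies concretely.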
Primary Subject Area: [Experience] Multimedia Applications
Secondary Subject Area: [Generation] Generative Multimedia
Relevance To Conference: In this paper, we introduce a novel method for removing reflections from images. This work will be useful as a building block in several multimedia applications such as video quality enhancement. Reflection removal is a complex task, posing challenges due to the intricate interplay between the desirable transmission and undesirable reflection layers. Existing methods focus on cues extracted solely from the input image, overlooking the wealth of information present in publicly available reflection-free images. Our paper introduces a novel deep-learning-based technique that automatically extracts useful semantic cues from an external database of reflection-free images to improve reflection removal. Our method can also be easily extended to an interactive application, where a user manually provides one or more images as the external database, from which the network extracts useful semantic information. We also introduce a new large-scale dataset to further research in reflection removal. Our experimental results validate the effectiveness of our method, showcasing its superiority over existing state-of-the-art approaches. We believe that the inclusion of this work in ACM MM will provide valuable insights and inspire further research in the field.
Supplementary Material: zip
Submission Number: 4008
