Flare Removal with Visual Prompt

22 Sept 2024 (modified: 13 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Flare removal, Visual Prompt, Prompt Inpainting Pipeline
TL;DR: We propose a model-agnostic pipeline named Prompt Inpainting Pipeline (PIP) to suppress the artifacts created during the flare removal process.
Abstract: Flare removal methods remove the streak, shimmer, and reflective flare in flare-corrupted images while preserving the light source. Recent deep learning methods focus on flare extraction and achieve promising results. They accomplish the task either by treating the flare as the residual between the flare-corrupted image and the flare-free image and recovering the flare-free image by subtracting the extracted flare, or by generating the flare-free image and the flare image simultaneously. However, these methods tend to produce images with severe artifacts, for two reasons: there is a gap between the true flare image and this residual, and handling flare extraction and clean-image generation at the same time places too heavy a burden on the network and cannot fully exploit the extracted flare. To alleviate this, we propose a model-agnostic pipeline named Prompt Inpainting Pipeline (PIP). Instead of viewing the gap between the flare-free and flare-corrupted images as the flare, or generating the flare-free and flare images simultaneously, PIP offers a novel perspective: borrowing the idea from inpainting methods, we remove the flare by masking the polluted area and rewriting the image details within it. Unlike standard inpainting methods, we first extract multi-scale features of the flare-corrupted image as a visual prompt and rewrite the missing textures conditioned on this prompt, since we find that writing the missing details from the remaining area alone rarely produces image details with sufficient semantic and high-frequency information. Comprehensive experiments verify the effectiveness of our pipeline.
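The masked-inpainting idea in the abstract can be sketched in a few lines. This is a toy illustration only, not the authors' architecture: `extract_multiscale_prompt` stands in for the paper's learned multi-scale feature extractor (here replaced by simple average pooling), and `prompt_inpaint` stands in for the learned generator that rewrites the masked flare region conditioned on the prompt (here replaced by upsampling the coarsest prompt map). All function names are hypothetical.

```python
import numpy as np

def extract_multiscale_prompt(image, scales=(1, 2, 4)):
    """Hypothetical prompt extractor: average-pool the flare-corrupted
    image at several scales and keep the pooled maps as a 'visual prompt'
    (the paper uses learned multi-scale features instead)."""
    prompt = []
    for s in scales:
        h, w = image.shape[0] // s, image.shape[1] // s
        pooled = image[:h * s, :w * s].reshape(h, s, w, s).mean(axis=(1, 3))
        prompt.append(pooled)
    return prompt

def prompt_inpaint(image, flare_mask, prompt):
    """Toy stand-in for the inpainting network: fill the masked
    (flare-polluted) pixels from the coarsest prompt map, leaving the
    unpolluted pixels untouched."""
    coarse = prompt[-1]
    s = image.shape[0] // coarse.shape[0]
    upsampled = np.kron(coarse, np.ones((s, s)))[:image.shape[0], :image.shape[1]]
    out = image.copy()
    out[flare_mask] = upsampled[flare_mask]
    return out

# Toy usage: an 8x8 grayscale image with a bright simulated flare patch.
img = np.full((8, 8), 0.2)
img[2:4, 2:4] = 1.0          # simulated flare
mask = img > 0.9             # mask over the polluted area
restored = prompt_inpaint(img, mask, extract_multiscale_prompt(img))
```

The point of the sketch is the data flow, not the filling rule: the prompt is computed from the whole corrupted image before masking, so the rewrite step has global context rather than only the pixels surrounding the hole.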
Primary Area: applications to computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2585
