SeBIR: Semantic-guided burst image restoration

Published: 01 Jan 2025 · Last Modified: 02 Mar 2025 · Neural Networks 2025 · CC BY-SA 4.0
Abstract: Burst image restoration methods offer the possibility of recovering faithful scene details from multiple low-quality snapshots captured by hand-held devices in adverse scenarios, and have therefore attracted increasing attention in recent years. However, individual frames in a burst typically suffer from inter-frame misalignment, leading to ghosting artifacts. Moreover, existing methods handle all burst frames indiscriminately and struggle to cleanly remove corrupted information because they neglect the spatio-temporally varying degradation across frames. To alleviate these limitations, we propose a general semantic-guided model named SeBIR for burst image restoration, which incorporates the semantic prior knowledge of the Segment Anything Model (SAM) to enable adaptive recovery. Specifically, instead of relying solely on a single alignment scheme, we develop a joint implicit and explicit strategy that fully leverages semantic knowledge as guidance to achieve inter-frame alignment. To further adaptively modulate and aggregate aligned features with spatio-temporal disparity, we design a semantic-guided fusion module that uses the intermediate semantic features of SAM as an explicit guide to suppress the inherent degradation and strengthen the valuable complementary information across frames. Additionally, a semantic-guided local loss is designed to boost local consistency and image quality. Extensive experiments on synthetic and real-world datasets demonstrate the superiority of our method in both quantitative and qualitative evaluations on burst super-resolution, burst denoising, and burst low-light image enhancement tasks.
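The semantic-guided fusion idea described above can be illustrated with a minimal NumPy sketch. This is not the paper's actual module: the function name, the dot-product scoring, and the softmax weighting are all illustrative assumptions standing in for SeBIR's learned gating; the sketch only shows the general pattern of using per-frame semantic features to weight the aggregation of aligned burst features.

```python
import numpy as np

def semantic_guided_fusion(aligned_feats, semantic_feats):
    """Hypothetical sketch of semantic-guided burst fusion.

    aligned_feats:  (T, C, H, W) aligned features from T burst frames
    semantic_feats: (T, C, H, W) semantic guide features (assumed, for
                    illustration, to share the burst features' shape)
    Returns a fused (C, H, W) feature map.
    """
    # Score each frame per pixel by its agreement with the semantic guide
    # (a simple stand-in for a learned gating network).
    scores = (aligned_feats * semantic_feats).sum(axis=1, keepdims=True)  # (T,1,H,W)
    # Softmax over the burst dimension yields spatio-temporally varying
    # fusion weights, down-weighting frames inconsistent with the semantics.
    scores = scores - scores.max(axis=0, keepdims=True)  # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum(axis=0, keepdims=True)
    # Weighted aggregation across frames.
    return (weights * aligned_feats).sum(axis=0)
```

The weights sum to one at every spatial location, so a degraded frame that disagrees with the semantic guide contributes less to the fused result than a clean, consistent one.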