Event-based Motion Deblurring with Modality-Aware Decomposition and Recomposition

Published: 26 Oct 2023 · Last Modified: 26 Jul 2025 · ACMMM 2023 · CC BY 4.0
Abstract: Event cameras offer visual information with microsecond accuracy and strong robustness against motion blur, providing a new perspective on motion deblurring. Effectively exploiting the collaboration of events and images for motion deblurring is a challenging endeavor. Existing event-based motion deblurring methods perform cross-modal fusion with modality-specific features (complementarity) while ignoring features shared across modalities (correlation), which may lead to insufficient fusion of events and images and, in turn, limited performance. To address these issues, following the idea of divide and conquer, we model cross-modality fusion through the decomposition and recomposition of modality-specific and modality-shared features. We therefore propose a novel event-image fusion network (EIFNet) based on modality-aware decomposition and recomposition. Specifically, in the decomposition stage, modality-shared and modality-specific feature separation clues are inferred in parallel by exploring the global correlation of the common-mode, differential-mode, and two modality features with dual cross-attention. In the recomposition stage, the separated modality-shared and modality-specific features are merged via long-range interaction with bi-directional exchange of supplementary information. Extensive experiments demonstrate that our method outperforms state-of-the-art event-driven and image-only methods.
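To make the decompose-then-recompose idea concrete, here is a minimal NumPy sketch. It is an illustration under assumptions, not the paper's EIFNet architecture: the function names (`decompose`, `recompose`), the use of sum/difference as common-mode and differential-mode signals, and the exact cross-attention wiring are all hypothetical simplifications of the stages the abstract describes.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, k, v):
    # scaled dot-product attention: queries q attend to keys k, values v
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def decompose(f_img, f_evt):
    # common-mode signal: what the two modality features agree on (assumed: sum)
    common = f_img + f_evt
    # differential-mode signal: where they disagree (assumed: difference)
    diff = f_img - f_evt
    # modality-shared clues: each modality attends to the common-mode signal
    shared = (cross_attention(f_img, common, common),
              cross_attention(f_evt, common, common))
    # modality-specific clues: each modality attends to the differential-mode signal
    specific = (cross_attention(f_img, diff, diff),
                cross_attention(f_evt, diff, diff))
    return shared, specific

def recompose(shared, specific):
    # bi-directional exchange sketched as mutual attention between the two
    # feature groups, followed by a residual fusion (a stand-in for the
    # long-range interaction described in the abstract)
    s = shared[0] + shared[1]
    p = specific[0] + specific[1]
    return s + cross_attention(s, p, p)
```

A forward pass over toy token features, e.g. `recompose(*decompose(np.random.randn(4, 8), np.random.randn(4, 8)))`, yields a fused feature map of the same shape as either input.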