MMDRFuse: Distilled Mini-Model with Dynamic Refresh for Multi-Modality Image Fusion

Published: 20 Jul 2024, Last Modified: 21 Jul 2024 · MM 2024 Oral · License: CC BY 4.0
Abstract: In recent years, Multi-Modality Image Fusion (MMIF) has been applied to many fields, attracting many researchers who endeavour to improve fusion performance. However, the prevailing focus has predominantly been on architecture design rather than training strategies. As a low-level vision task, image fusion should deliver output images quickly for observation and for supporting downstream tasks. Thus, superfluous computational and storage overheads should be avoided. In this work, a lightweight Distilled Mini-Model with a Dynamic Refresh strategy (MMDRFuse) is proposed to achieve this objective. To pursue model parsimony, an extremely small convolutional network with a total of 113 trainable parameters (0.44 KB) is obtained through three carefully designed supervisions. First, digestible distillation is constructed by emphasising external spatial feature consistency, delivering soft supervision with balanced details and saliency for the target network. Second, we develop a comprehensive loss to balance the pixel, gradient, and perception clues from the source images. Third, an innovative dynamic refresh training strategy coordinates historical parameters and current supervision during training, together with an adaptive adjustment function to optimise the fusion network. Extensive experiments on several public datasets demonstrate that our method exhibits promising advantages in terms of model efficiency and complexity, with superior performance in multiple image fusion tasks and a downstream pedestrian detection application.
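To make the scale of the model concrete, the following is a minimal PyTorch sketch, not the released implementation: a hypothetical two-layer network whose 2→4→1 channel layout with 3×3 kernels happens to total 113 trainable parameters, together with simplified stand-ins for the pixel and gradient terms of the comprehensive loss and an EMA-style stand-in for the dynamic refresh of historical parameters. All names, layer choices, and hyperparameters here are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MiniFusionNet(nn.Module):
    """Hypothetical two-layer fusion network (not the authors' exact design).

    Conv2d(2, 4, 3): 2*4*3*3 + 4 = 76 parameters
    Conv2d(4, 1, 3): 4*1*3*3 + 1 = 37 parameters
    Total: 113 trainable parameters, matching the count quoted above.
    """

    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(2, 4, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(4, 1, kernel_size=3, padding=1)

    def forward(self, ir, vis):
        # Stack the infrared and visible inputs along the channel axis.
        x = torch.cat([ir, vis], dim=1)
        x = F.relu(self.conv1(x))
        # Sigmoid keeps the fused image in [0, 1].
        return torch.sigmoid(self.conv2(x))


def fusion_loss(fused, ir, vis, grad_weight=1.0):
    """Simplified pixel + gradient loss; a perceptual (feature-space) term
    would be added in the same spirit."""
    # Pixel term: pull the fused image towards the element-wise maximum.
    pixel = F.l1_loss(fused, torch.maximum(ir, vis))

    def grads(img):
        # Finite-difference gradients along width and height.
        return img[..., :, 1:] - img[..., :, :-1], img[..., 1:, :] - img[..., :-1, :]

    fx, fy = grads(fused)
    ix, iy = grads(ir)
    vx, vy = grads(vis)
    # Gradient term: preserve the stronger edge from either source.
    grad = (F.l1_loss(fx.abs(), torch.maximum(ix.abs(), vx.abs()))
            + F.l1_loss(fy.abs(), torch.maximum(iy.abs(), vy.abs())))
    return pixel + grad_weight * grad


@torch.no_grad()
def dynamic_refresh(history, model, momentum=0.9):
    """EMA-style stand-in for refreshing historical parameters; the paper
    couples this with an adaptive adjustment function during training."""
    for h, p in zip(history.parameters(), model.parameters()):
        h.mul_(momentum).add_(p, alpha=1.0 - momentum)


if __name__ == "__main__":
    net = MiniFusionNet()
    print(sum(p.numel() for p in net.parameters()))  # 113
    ir, vis = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
    print(fusion_loss(net(ir, vis), ir, vis).item())
```

The channel layout above was chosen purely so that the parameter count matches the figure reported in the abstract; the actual MMDRFuse architecture, loss weighting, and refresh schedule may differ.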
Primary Subject Area: [Content] Multimodal Fusion
Secondary Subject Area: [Experience] Multimedia Applications
Relevance To Conference: Multi-Modality Image Fusion (MMIF) is an important foundational task within the field of multimedia research. Our study creates a mini model of only 0.44 KB with a total of 113 parameters, requiring a computational complexity of only 0.14 GFLOPs (giga floating-point operations). On average, it takes about 0.62 ms to fuse a pair of images. Therefore, we can perform multimodal fusion at an extremely low cost in storage, computation, and time, while enhancing downstream detection tasks. Experiments demonstrate that our fusion performance surpasses many current state-of-the-art methods. Moreover, our method is the first multimodal fusion solution with a model size below 1 KB, paving the way for practical multimodal processing.
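The quoted 0.44 KB model size follows directly from the parameter count if the weights are stored as 32-bit floats (an assumption; the released checkpoint format may differ):

```python
# Back-of-the-envelope check of the quoted model size (float32 assumed).
n_params = 113           # trainable parameters reported in the abstract
bytes_per_param = 4      # 32-bit floating-point weights
size_kb = n_params * bytes_per_param / 1024
print(f"{size_kb:.2f} KB")  # -> 0.44 KB
```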
Submission Number: 2478