M2CD: A Unified MultiModal Framework for Optical-SAR Change Detection With Mixture of Experts and Self-Distillation

Ziyuan Liu, Jiawei Zhang, Wenyu Wang, Yuantao Gu

Published: 01 Jan 2025 · Last Modified: 06 Nov 2025 · IEEE Geoscience and Remote Sensing Letters · CC BY-SA 4.0
Abstract: Most existing change detection (CD) methods focus on optical images captured at different times, and deep learning (DL) has achieved remarkable success in this domain. However, in extreme scenarios such as disaster response, synthetic aperture radar (SAR), with its active imaging capability, is better suited to providing post-event data. This introduces new challenges for CD methods, as existing weight-sharing Siamese networks struggle to effectively learn the cross-modal data distribution between optical and SAR images. To address this challenge, we propose a unified multimodal CD framework, M2CD. We integrate mixture-of-experts (MoE) modules into the backbone to explicitly handle diverse modalities, thereby enhancing the model's ability to learn multimodal data distributions. Additionally, we propose a novel optical-to-SAR path (O2SP) and apply self-distillation during training to reduce the feature-space discrepancy between modalities, further alleviating the model's learning burden. We design multiple variants of M2CD based on both CNN and transformer backbones. Extensive experiments validate the effectiveness of the proposed framework, with the MiT-b1 version of M2CD outperforming all state-of-the-art (SOTA) methods on optical-SAR CD tasks. Code is available at https://github.com/circleLZY/M2CD
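The abstract does not spell out the module internals (see the linked repository for the authors' code), but a minimal PyTorch sketch of the two ingredients it names, a modality-aware MoE block and an optical-to-SAR distillation term, might look as follows. All names (`ModalityMoE`, `self_distillation_loss`), the dense soft gating, and the MSE distillation objective are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityMoE(nn.Module):
    """Hypothetical mixture-of-experts block: per-token soft gating over
    a small set of expert MLPs, so optical and SAR features can be
    routed to different experts rather than forced through shared weights."""
    def __init__(self, dim: int, num_experts: int = 4, hidden: int = 256):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        weights = F.softmax(self.gate(x), dim=-1)                        # (B, T, E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=-2)   # (B, T, E, dim)
        return (weights.unsqueeze(-1) * expert_out).sum(dim=-2)          # (B, T, dim)

def self_distillation_loss(sar_feat: torch.Tensor,
                           optical_feat: torch.Tensor) -> torch.Tensor:
    """Illustrative O2SP-style objective: pull SAR features toward
    detached optical features to shrink the cross-modal feature gap.
    The actual M2CD training loss may differ."""
    return F.mse_loss(sar_feat, optical_feat.detach())
```

The design intuition, as the abstract describes it, is that gating lets each modality use its own experts instead of one shared feature extractor, while the distillation term gives the SAR branch an optical-aligned target so the downstream CD head sees a smaller distribution gap.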