Abstract: Multispectral object detection has attracted increasing attention due to its robustness under diverse illumination conditions. The key challenge lies in effectively aggregating multispectral features into highly discriminative representations. To address this challenge, we propose a novel Multidimensional Fusion Network (MMFN) that exploits multi-modal information from local, global, and channel perspectives. Specifically, at the local level, a shifted-window fusion module captures the local features of each modality and their inter-relationships. To complement this local information, we design a global interaction module that fuses holistic, high-level semantic information spanning the entire image. Finally, cross-channel learning distills the channel-wise dependencies and complementarities between modalities and produces the final fused representation. Comprehensive experiments on three publicly available datasets demonstrate the effectiveness of the proposed method, with notable performance gains over state-of-the-art multispectral object detectors. Our code will be released.
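To make the three fusion perspectives concrete before the code release, the following is a minimal, hypothetical PyTorch sketch of a local/global/channel fusion block; it is not the authors' implementation. The module names, the convolutional stand-in for shifted-window fusion, the cross-attention used for global interaction, the SE-style gating used for channel fusion, and the additive aggregation are all illustrative assumptions.

```python
# Minimal sketch (NOT the paper's released code) of a three-branch
# multispectral fusion block over RGB and thermal feature maps.
import torch
import torch.nn as nn


class ChannelFusion(nn.Module):
    """Cross-channel re-weighting of concatenated RGB/thermal features (assumed SE-style gate)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, 2 * channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels // reduction, 2 * channels, 1),
            nn.Sigmoid(),
        )
        self.proj = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, rgb: torch.Tensor, thermal: torch.Tensor) -> torch.Tensor:
        x = torch.cat([rgb, thermal], dim=1)      # (B, 2C, H, W)
        x = x * self.gate(x)                      # channel-wise gating
        return self.proj(x)                       # project back to C channels


class GlobalInteraction(nn.Module):
    """Image-level interaction via cross-attention between modalities (assumed design)."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, rgb: torch.Tensor, thermal: torch.Tensor) -> torch.Tensor:
        b, c, h, w = rgb.shape
        q = rgb.flatten(2).transpose(1, 2)        # (B, HW, C) queries from RGB
        kv = thermal.flatten(2).transpose(1, 2)   # keys/values from thermal
        out, _ = self.attn(q, kv, kv)
        out = self.norm(out + q)                  # residual + layer norm
        return out.transpose(1, 2).reshape(b, c, h, w)


class MultidimensionalFusion(nn.Module):
    """Combine local, global, and channel branches; a 3x3 conv stands in for shifted-window fusion."""

    def __init__(self, channels: int):
        super().__init__()
        self.local = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        self.global_branch = GlobalInteraction(channels)
        self.channel_branch = ChannelFusion(channels)

    def forward(self, rgb: torch.Tensor, thermal: torch.Tensor) -> torch.Tensor:
        local = self.local(torch.cat([rgb, thermal], dim=1))
        global_feat = self.global_branch(rgb, thermal)
        channel_feat = self.channel_branch(rgb, thermal)
        return local + global_feat + channel_feat  # simple additive aggregation (assumption)


if __name__ == "__main__":
    rgb = torch.randn(2, 64, 32, 32)
    thermal = torch.randn(2, 64, 32, 32)
    fused = MultidimensionalFusion(64)(rgb, thermal)
    print(fused.shape)  # torch.Size([2, 64, 32, 32])
```

The fused map would then feed a standard detection head; how the branches are actually combined and windowed in MMFN is specified in the paper and its forthcoming code.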