A Visual State Space Model-Based Cross-Domain Adaptive Detection Method for Imbalanced Medical Image Distribution

Published: 2025 · Last Modified: 22 Jan 2026 · Appl. Intell. 2025 · CC BY-SA 4.0
Abstract: Efficient adaptive detection of imbalanced medical images across domains remains a critical challenge in medical artificial intelligence. Convolutional neural networks lack long-range dependency modeling, while transformers, despite their strengths, suffer from high computational costs due to their quadratic complexity. Recently, the visual state space model has gained attention for its efficient global modeling and linear computational complexity. Building on the principles of this model, we propose a new cross-domain adaptive detection method for imbalanced medical images, named medical adaptive mamba detection (Med-AMamDa). The method comprises two key components: (i) an adaptive selector that uses a sigmoid function to emphasize task-relevant features based on input characteristics, and (ii) a dual-branch block, termed the combined convolution and state space model (CoCoS) block, which captures both local details and global context to enhance detection accuracy. We perform extensive experiments on four medical image datasets and compare Med-AMamDa with seven existing methods. The results indicate that our method achieves competitive performance in adaptive detection of medical images with imbalanced data distributions.
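The two components can be read as (i) a sigmoid channel gate and (ii) a fusion of a local convolution branch with a global state space branch. The numpy sketch below is a minimal illustration of those ideas under stated assumptions, not the paper's actual implementation: all names (`adaptive_select`, `cocos_block`, the gate weights) are hypothetical, a toy linear recurrence stands in for the state space branch, and branch fusion by addition is assumed.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adaptive_select(x, w, b):
    """Hypothetical adaptive selector: a sigmoid gate computed from the
    input's channel statistics re-weights task-relevant channels."""
    stats = x.mean(axis=0)            # per-channel summary of the input
    gate = sigmoid(w @ stats + b)     # gate in (0, 1) per channel
    return x * gate                   # emphasize selected channels

def ssm_branch(x, a=0.9):
    """Toy state space branch: the linear recurrence h_t = a*h_{t-1} + x_t
    gives each position a summary of all earlier positions (global context)
    in time linear in the sequence length."""
    h = np.zeros(x.shape[1])
    out = np.empty_like(x)
    for t in range(x.shape[0]):
        h = a * h + x[t]
        out[t] = h
    return out

def conv_branch(x, kernel=np.array([0.25, 0.5, 0.25])):
    """Toy convolution branch: a small 1-D kernel captures local detail."""
    pad = len(kernel) // 2
    xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
    return np.stack([
        sum(kernel[k] * xp[t + k] for k in range(len(kernel)))
        for t in range(x.shape[0])
    ])

def cocos_block(x, w, b):
    """Hypothetical dual-branch block: local (conv) and global (SSM)
    features, fused by addition and passed through the adaptive selector."""
    fused = conv_branch(x) + ssm_branch(x)
    return adaptive_select(fused, w, b)

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 4))      # sequence of 16 tokens, 4 channels
w = rng.standard_normal((4, 4))
b = np.zeros(4)
y = cocos_block(x, w, b)
print(y.shape)  # (16, 4)
```

The sketch keeps the structural point of the abstract: the convolution path sees only a small neighborhood, the recurrence path sees the whole prefix, and the sigmoid gate decides per channel how strongly the fused features pass through.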