Abstract: Mamba-based remote sensing segmentation has emerged as a promising approach for land cover classification, urban planning, and natural disaster assessment. However, the complex physical characteristics of the Earth’s surface and the large-scale nature of remote sensing images present challenges, particularly in effectively capturing global and local context as well as fine-grained details during segmentation. To fill this gap, we develop a novel Global-Local Fused Mamba-Unet (GLFMamba-U). GLFMamba-U incorporates a Global-Local Fused VSS (GLFV) module to synergize global and local features and a Feedforward Head (FFH) to refine spatial representations. With these modules, GLFMamba-U achieves detailed segmentation without sacrificing computational efficiency. Experimental results on LoveDA (54.47% mIoU, +1.12%) and ISPRS Vaihingen (84.84% mIoU, +1.41%) show that GLFMamba-U outperforms state-of-the-art models. These results underscore GLFMamba-U’s efficacy in high-resolution remote sensing segmentation.
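To make the global-local fusion idea concrete, the sketch below shows a minimal PyTorch block that combines a local branch with a global-context branch and fuses the two. This is only an illustration of the general fusion pattern, not the paper's GLFV module: the abstract does not specify the internals, so the global branch is a stand-in (pooled context via a 1x1 convolution) for the actual Mamba/VSS selective-scan branch, and all layer choices and names are assumptions.

```python
import torch
import torch.nn as nn


class GlobalLocalFusionBlock(nn.Module):
    """Hypothetical global-local fusion block (illustrative only).

    The local branch uses a depthwise 3x3 convolution to capture fine-grained
    detail; the global branch is a simple pooled-context stand-in for the
    VSS/Mamba scan described in the paper.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Local branch: depthwise conv for fine-grained spatial detail.
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels),
            nn.BatchNorm2d(channels),
            nn.GELU(),
        )
        # Global branch stand-in: image-level context broadcast back over the map.
        self.global_ctx = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.GELU(),
        )
        # Fusion: concatenate both branches and project back to the input width.
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local_feat = self.local(x)
        global_feat = self.global_ctx(x).expand_as(x)
        # Residual connection keeps the original features alongside the fused ones.
        return x + self.fuse(torch.cat([local_feat, global_feat], dim=1))


if __name__ == "__main__":
    block = GlobalLocalFusionBlock(64)
    out = block(torch.randn(1, 64, 128, 128))
    print(out.shape)  # torch.Size([1, 64, 128, 128])
```

In a U-Net-style segmentation network, a block like this would typically replace or follow the encoder/decoder stages so that each feature map carries both scene-level context and local detail before upsampling.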