PathMamba: Weakly Supervised State Space Model for Multi-class Segmentation of Pathology Images

Published: 01 Jan 2024 · Last Modified: 13 Apr 2025 · MICCAI 2024 · CC BY-SA 4.0
Abstract: Accurate segmentation of pathology images plays a crucial role in the digital pathology workflow. Fully supervised models achieve excellent performance through dense pixel-level annotation, but annotating gigapixel pathology images is extremely expensive and time-consuming. Recently, the state space model with an efficient hardware-aware design, known as Mamba, has achieved impressive results. In this paper, we propose a weakly supervised state space model (PathMamba) for multi-class segmentation of pathology images using only image-level labels. Our method integrates both pixel-level and patch-level features of pathology images and generates more regionally consistent segmentation results. Specifically, we first extract pixel-level feature maps via Multi-Instance Multi-Label Learning, treating pixels as instances; these maps are then fed into our Contrastive Mamba Block, which adopts a state space model and incorporates contrastive learning to extract non-causal, dual-granularity features from pathology images. In addition, we propose a Deep Contrast Supervised Loss to fully exploit the limited annotated information available in the weakly supervised setting. Our approach facilitates a comprehensive feature learning process, capturing both fine-grained details and broader global semantic context in pathology images. Experiments on two public pathology image datasets show that the proposed method outperforms state-of-the-art weakly supervised methods. The code is available at https://github.com/hemo0826/PathMamba.
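The two core ingredients named in the abstract — a non-causal state-space scan and a contrastive objective aligning dual-granularity (pixel-level and patch-level) features — can be sketched in a few lines. This is a minimal illustrative sketch in NumPy, not the authors' implementation: the linear recurrence, the bidirectional averaging used to make the scan non-causal, and the InfoNCE-style loss are standard stand-ins for the paper's Contrastive Mamba Block and Deep Contrast Supervised Loss; all function names and shapes here are hypothetical.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Causal linear state-space recurrence over a sequence x of shape (T, d_in):
    h_t = A h_{t-1} + B x_t,  y_t = C h_t.
    A: (d_h, d_h), B: (d_h, d_in), C: (d_out, d_h). Returns (T, d_out)."""
    h = np.zeros(A.shape[0])
    ys = []
    for t in range(x.shape[0]):
        h = A @ h + B @ x[t]
        ys.append(C @ h)
    return np.stack(ys)

def bidirectional_ssm(x, A, B, C):
    """Non-causal features: average a forward scan with a reversed scan,
    so each position sees context from both directions (a common way to
    lift a causal SSM to 2-D pathology patches flattened into sequences)."""
    fwd = ssm_scan(x, A, B, C)
    bwd = ssm_scan(x[::-1], A, B, C)[::-1]
    return 0.5 * (fwd + bwd)

def contrastive_alignment_loss(pixel_feats, patch_feats, tau=0.1):
    """InfoNCE-style loss aligning pixel-level and patch-level features:
    features at the same position are positives, all others negatives."""
    a = pixel_feats / np.linalg.norm(pixel_feats, axis=1, keepdims=True)
    b = patch_feats / np.linalg.norm(patch_feats, axis=1, keepdims=True)
    logits = a @ b.T / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

As a usage sketch, `bidirectional_ssm` would run over a flattened sequence of pixel embeddings, and the loss would pull together the pixel-level and patch-level views of each location while pushing apart mismatched ones.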