One-Prompt to Segment All Medical Images

Published: 01 Jan 2024 · Last Modified: 18 Feb 2025 · CVPR 2024 · CC BY-SA 4.0
Abstract: Large foundation models, known for their strong zero-shot generalization, have excelled in visual and language applications. However, applying them to medical image segmentation, a domain with diverse imaging types and target labels, remains an open challenge. Current approaches, such as adapting interactive segmentation models like the Segment Anything Model (SAM), require user prompts for each sample during inference. Alternatively, transfer-learning methods such as few/one-shot models demand labeled samples, leading to high costs. This paper introduces a new paradigm toward universal medical image segmentation, termed ‘One-Prompt Segmentation.’ One-Prompt Segmentation combines the strengths of one-shot and interactive methods: at inference, given just one prompted sample, it handles an unseen task in a single forward pass. We train the One-Prompt Model on 64 open-source medical datasets, accompanied by a collection of over 3,000 clinician-labeled prompts. Tested on 14 previously unseen datasets, the One-Prompt Model shows superior zero-shot segmentation capability, outperforming a wide range of related methods. The code and data are released at https://github.com/KidsWithTokens/one-prompt.
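To make the inference protocol concrete, below is a minimal sketch of what "one prompted sample, one forward pass" looks like: a single template image plus its clinician prompt conditions the segmentation of every unseen query image from the same task. The `OnePromptModel` class, its layers, and the tensor shapes are illustrative assumptions for this sketch, not the released architecture; see the repository above for the actual implementation.

```python
# Hypothetical sketch of One-Prompt-style inference (not the released API).
# One prompted "template" sample conditions segmentation of unseen "query"
# images from the same task, each in a single forward pass.
import torch
import torch.nn as nn

class OnePromptModel(nn.Module):
    """Toy stand-in: encode template + prompt, fuse with query features."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.image_enc = nn.Conv2d(1, dim, 3, padding=1)   # shared image encoder
        self.prompt_enc = nn.Conv2d(1, dim, 3, padding=1)  # encodes the prompt (e.g. rasterized clicks/scribbles)
        self.fuse = nn.Conv2d(2 * dim, dim, 1)             # injects template context into query features
        self.head = nn.Conv2d(dim, 1, 1)                   # per-pixel foreground logit

    def forward(self, query, template, prompt):
        q = self.image_enc(query)
        # Pool the prompted template into a task embedding, broadcast over the query.
        t = self.image_enc(template) + self.prompt_enc(prompt)
        t = t.mean(dim=(2, 3), keepdim=True).expand_as(q)
        return self.head(self.fuse(torch.cat([q, t], dim=1)))

model = OnePromptModel().eval()
template = torch.rand(1, 1, 128, 128)                  # the single prompted sample
prompt = (torch.rand(1, 1, 128, 128) > 0.95).float()   # clinician prompt as a binary map
query = torch.rand(1, 1, 128, 128)                     # unseen image from the same task
with torch.no_grad():
    mask = model(query, template, prompt).sigmoid() > 0.5
print(mask.shape)  # torch.Size([1, 1, 128, 128])
```

The key contrast with SAM-style interaction is that the prompt attaches to the template once, not to every query: after the template is encoded, no further user input is needed at test time.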