Enhancing Physician Flexibility: Prompt-Guided Multi-class Pathological Segmentation for Diverse Outcomes

Published: 25 Sept 2024 · Last Modified: 21 Oct 2024 · IEEE BHI'24 · CC BY 4.0
Keywords: Medical image segmentation, Renal pathology, Visual-language model
TL;DR: This paper presents a language-guided model for pathology image segmentation, evaluated on a multi-task renal pathology segmentation dataset. It also explores enhancing model flexibility by using free-text prompts instead of traditional task ID tokens.
Abstract: The Vision Foundation Model has recently gained attention in medical image analysis. Its zero-shot learning capabilities accelerate AI deployment and enhance the generalizability of clinical applications. However, pathological image segmentation places special demands on the flexibility of segmentation targets. For instance, a single click on a Whole Slide Image (WSI) could signify a cell, a functional unit, or a tissue layer, adding complexity to the segmentation task. Current models primarily predict a fixed set of outcomes and lack the flexibility to accommodate physician input. In this paper, we explore enhancing segmentation model flexibility by introducing varied task prompts through a Large Language Model (LLM), compared against traditional task ID tokens. Our contribution is four-fold: (1) we construct a computationally efficient pipeline that uses finetuned language prompts to guide flexible multi-class segmentation; (2) we compare segmentation performance with fixed prompts against free-text prompts; (3) we design a multi-task kidney pathology segmentation dataset with corresponding varied free-text prompts; and (4) we evaluate our approach on the kidney pathology dataset, assessing its capacity to generalize to new cases during inference.
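To make the contrast between the two conditioning schemes concrete, below is a minimal sketch, not the authors' implementation: the module name `PromptGuidedSegmenter`, the FiLM-style feature modulation, and the use of a frozen CLIP-style text encoder to produce the prompt embedding are all illustrative assumptions. It shows how a free-text prompt embedding could stand in for a fixed task-ID embedding when conditioning a multi-class segmentation decoder.

```python
# Minimal sketch (not the paper's code): free-text prompt embeddings vs.
# fixed task-ID tokens as the conditioning signal for segmentation.
import torch
import torch.nn as nn


class PromptGuidedSegmenter(nn.Module):  # hypothetical module name
    def __init__(self, num_tasks: int = 8, embed_dim: int = 512, num_classes: int = 2):
        super().__init__()
        # Baseline conditioning: one learned embedding per fixed task ID.
        self.task_embed = nn.Embedding(num_tasks, embed_dim)
        # Toy encoder/decoder standing in for the segmentation backbone.
        self.encoder = nn.Conv2d(3, embed_dim, kernel_size=3, padding=1)
        self.decoder = nn.Conv2d(embed_dim, num_classes, kernel_size=1)

    def forward(self, image: torch.Tensor, task_id: torch.Tensor = None,
                prompt_embed: torch.Tensor = None) -> torch.Tensor:
        # Either a fixed task ID or a free-text prompt embedding conditions
        # the features; the prompt path is what adds inference-time flexibility.
        cond = self.task_embed(task_id) if prompt_embed is None else prompt_embed
        feats = self.encoder(image)              # (B, D, H, W)
        feats = feats * cond[:, :, None, None]   # FiLM-style modulation (assumed)
        return self.decoder(feats)               # (B, C, H, W) mask logits


# Usage: a frozen language encoder (e.g., a CLIP text encoder) would map a
# request like "segment the glomeruli in this PAS-stained WSI" to
# `prompt_embed`; a random vector is used here as a stand-in.
model = PromptGuidedSegmenter()
wsi_patch = torch.randn(1, 3, 256, 256)
mask_by_id = model(wsi_patch, task_id=torch.tensor([3]))
mask_by_prompt = model(wsi_patch, prompt_embed=torch.randn(1, 512))
```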
Track: 7. Digital radiology and pathology
Registration Id: 9DNR3HVMG33
Submission Number: 321