Enhancing Physician Flexibility: Prompt-Guided Multi-class Pathological Segmentation for Diverse Outcomes
Keywords: Medical image segmentation, Renal pathology, Visual-language model
TL;DR: This paper presents a language-guided model for pathology image segmentation, tested on a multitask renal pathology segmentation dataset. It also explores enhancing model flexibility by using free-text prompts, compared to traditional task ID tokens.
Abstract: Vision foundation models have recently gained attention in medical image analysis. Their zero-shot learning capabilities accelerate AI deployment and enhance the generalizability of clinical applications. However, segmenting pathological images places a special demand on the flexibility of segmentation targets. For instance, a single click on a Whole Slide Image (WSI) could signify a cell, a functional unit, or a tissue layer, adding complexity to the segmentation task. Current models predict a fixed set of outcomes and lack the flexibility to accommodate physician input. In this paper, we explore enhancing segmentation model flexibility by introducing varied task prompts through a Large Language Model (LLM), compared against traditional task ID tokens. Our contribution is four-fold: (1) we construct a computationally efficient pipeline that uses finetuned language prompts to guide flexible multi-class segmentation; (2) we compare segmentation performance with fixed prompts against free-text prompts; (3) we design a multi-task kidney pathology segmentation dataset with corresponding varied free-text prompts; and (4) we evaluate our approach on the kidney pathology dataset, assessing its capacity to generalize to new cases during inference.
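The core idea of replacing task ID tokens with language embeddings can be illustrated with a minimal sketch. Everything here is an assumption for illustration: `embed_prompt` is a toy stand-in for the paper's finetuned language encoder, and the dot-product head is a simplified placeholder for the actual segmentation model; neither reflects the authors' implementation.

```python
import hashlib
import numpy as np

def embed_prompt(prompt: str, dim: int = 16) -> np.ndarray:
    # Toy stand-in for a finetuned LLM text encoder (assumption): a
    # deterministic pseudo-embedding seeded from the prompt string, so
    # different free-text prompts map to different conditioning vectors.
    seed = int(hashlib.md5(prompt.encode()).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    return rng.standard_normal(dim)

def prompt_guided_segment(features: np.ndarray, prompt: str) -> np.ndarray:
    # features: (H, W, C) feature map from a vision backbone.
    # The prompt embedding conditions the output: a per-pixel dot product
    # with the text embedding, squashed to a foreground probability.
    emb = embed_prompt(prompt, dim=features.shape[-1])
    logits = features @ emb                      # (H, W) per-pixel scores
    return 1.0 / (1.0 + np.exp(-logits))         # sigmoid -> mask in (0, 1)

# Usage: the same image features yield different masks under different
# free-text prompts, which is the flexibility the paper targets.
feats = np.random.default_rng(0).standard_normal((64, 64, 16))
glom_mask = prompt_guided_segment(feats, "segment the glomeruli")
tub_mask = prompt_guided_segment(feats, "segment the tubules")
```

In contrast, a task-ID-token design would restrict `prompt` to a fixed vocabulary of class indices; the language-conditioned variant accepts arbitrary physician phrasing at inference time.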
Track: 7. Digital radiology and pathology
Registration Id: 9DNR3HVMG33
Submission Number: 321