Parameter-efficient Prompt Tuning and Hierarchical Textual Guidance for Few-shot Whole Slide Image Classification

Published: 26 Apr 2026, Last Modified: 26 Apr 2026 · Med-Reasoner 2026 Poster · CC BY 4.0
Keywords: Whole Slide Image Classification, Vision Language Models, Few-shot Learning
TL;DR: This paper introduces a method for few-shot WSI classification that combines parameter-efficient prompt tuning with WSI representation learning guided by soft hierarchical textual cues.
Abstract: Whole Slide Images (WSIs) are gigapixel in scale and are typically partitioned into small instances in WSI classification pipelines for computational feasibility. However, obtaining extensive instance-level annotations is costly, making few-shot weakly supervised WSI classification (FSWC) crucial for learning from limited slide-level labels. Recently, pre-trained vision-language models (VLMs) have been adopted in FSWC, yet they exhibit several limitations. Existing prompt tuning methods in FSWC substantially increase both the number of trainable parameters and the inference overhead. Moreover, current methods discard instances with low alignment to text embeddings from VLMs, potentially leading to information loss. To address these challenges, we make two key contributions. First, we introduce a parameter-efficient prompt tuning method that scales and shifts features in the text encoder, significantly reducing the computational cost. Second, to leverage not only the pre-trained knowledge of VLMs but also the inherent hierarchical structure of WSIs, we introduce a WSI representation learning approach with a soft hierarchical textual guidance strategy that avoids hard instance filtering. Comprehensive evaluations on pathology datasets covering breast, lung, and ovarian cancer demonstrate consistent improvements of up to 10.9%, 7.8%, and 13.8%, respectively, over state-of-the-art FSWC methods. Our method reduces the number of trainable parameters by 18.1% on both the breast and lung cancer datasets and by 5.8% on the ovarian cancer dataset, while also excelling at weakly supervised tumor localization.
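The scale-and-shift idea described in the abstract can be illustrated with a minimal sketch. This is an assumption-laden toy example (the function name, shapes, and initialization are hypothetical, not the authors' implementation): frozen text-encoder features are modulated elementwise by a learnable scale and shift, so each tuned layer adds only 2·d parameters instead of the d·d of a full linear adapter.

```python
# Hedged sketch of SSF-style scale-and-shift feature modulation.
# All names and shapes here are illustrative assumptions, not the
# paper's actual architecture.

def scale_shift(features, gamma, beta):
    """Modulate frozen encoder features with a learnable per-dimension
    scale (gamma) and shift (beta); identity when gamma=1, beta=0."""
    return [[x * g + b for x, g, b in zip(row, gamma, beta)]
            for row in features]

# Toy frozen text-encoder outputs: 2 tokens, feature dimension d = 2.
features = [[1.0, 2.0], [3.0, 4.0]]

# Identity initialization: the modulated features equal the inputs,
# so training starts from the frozen model's behavior.
gamma0, beta0 = [1.0, 1.0], [0.0, 0.0]
print(scale_shift(features, gamma0, beta0))  # [[1.0, 2.0], [3.0, 4.0]]

# After (hypothetical) training, gamma and beta rescale each dimension.
gamma1, beta1 = [2.0, 0.5], [1.0, -1.0]
print(scale_shift(features, gamma1, beta1))  # [[3.0, 0.0], [7.0, 1.0]]
```

Because only `gamma` and `beta` are trainable while the encoder stays frozen, the trainable-parameter count grows linearly with the feature dimension, which is the kind of saving the abstract's reported parameter reductions refer to.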
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 2