Democratizing Pathology Co-Pilots: An Open Pipeline and Dataset for Whole-Slide Vision-Language Modelling
Keywords: Instruction-tuning, visual question-answering, whole-slide images, digital assistant
Abstract: Vision-language models (VLMs) have the potential to become co-pilots for pathologists. However, most VLMs either focus on small regions of interest within whole-slide images (WSIs), provide only static slide-level outputs, or rely on data that is not publicly available, limiting reproducibility. Furthermore, training data pairing WSIs with detailed clinical reports is scarce, restricting progress toward transparent and generalisable VLMs. We address these limitations with three main contributions. First, we introduce _Polysome_, a standardised tool for synthetic instruction generation. Second, we apply Polysome to the public HISTAI dataset to generate _HISTAI-Instruct_, a large whole-slide instruction-tuning dataset spanning 24,259 slides and over 1.1 million instruction-response pairs. Finally, we use HISTAI-Instruct to train _ANTONI-$\alpha$_, a VLM capable of visual question answering (VQA). We show that ANTONI-$\alpha$ outperforms MedGemma on WSI-level VQA tasks covering tissue identification, neoplasm detection, and differential diagnosis. We also compare the performance of multiple versions of ANTONI-$\alpha$ trained with different amounts of data. All methods, data, and code are publicly available.
Primary Subject Area: Generative Models
Secondary Subject Area: Application: Histopathology
Registration Requirement: Yes
Reproducibility: https://github.com/computationalpathologygroup/ANTONI-Alpha
Read CFP & Author Instructions: Yes
Originality Policy: Yes
Single-blind & Not Under Review Elsewhere: Yes
LLM Policy: Yes
Submission Number: 305