Continuous Magnification Training Improves Embedding Quality in Histopathological Self-Supervised Learning
Keywords: Histopathological foundation models, Self-supervised learning, Digital pathology, Continuous magnification training, Magnification robustness
TL;DR: Histopathology SSL models trained on discrete magnifications show systematic performance degradation at intermediate zoom levels, which we solve through continuous magnification sampling during training.
Track: Findings
Abstract: Current histopathological foundation models are trained on discrete standard microscope magnifications (0.25, 0.5, 1.0, and 2.0 microns per pixel). Using the unsupervised RankMe metric, we show that this degrades embedding-space quality at magnifications outside the training distribution, with rank scores dropping at intermediate scales. We introduce continuous magnification training, where patch magnifications are sampled from a continuous distribution during training, and show that this eliminates these irregularities in the embedding space.
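The sampling scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the log-uniform choice of distribution, and the assumed base scan resolution and patch size are all hypothetical.

```python
import math
import random

BASE_MPP = 0.25   # assumed native scan resolution (microns per pixel, level 0)
PATCH_PX = 224    # assumed output patch side length in pixels

def sample_mpp(low=0.25, high=2.0):
    """Draw a target magnification log-uniformly from [low, high] mpp,
    instead of picking one of the discrete scales {0.25, 0.5, 1.0, 2.0}."""
    u = random.random()
    return math.exp(math.log(low) + u * (math.log(high) - math.log(low)))

def source_region_px(target_mpp, base_mpp=BASE_MPP, patch_px=PATCH_PX):
    """Side length (in level-0 pixels) of the region to crop so that
    resizing it to patch_px yields a patch at target_mpp."""
    return round(patch_px * target_mpp / base_mpp)

# During training, each patch gets its own continuously sampled scale:
mpp = sample_mpp()
side = source_region_px(mpp)
# crop a (side x side) region from the slide, then resize to PATCH_PX
```

A log-uniform draw spreads samples evenly across octaves of magnification (0.25–0.5 is covered as densely as 1.0–2.0), which matches the geometric spacing of the standard discrete levels; a plain uniform draw would oversample the coarse end.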
General Area: Applications and Practice
Specific Subject Areas: Medical Imaging, Representation Learning
Data And Code Availability: No
Ethics Board Approval: No
Entered Conflicts: I confirm the above
Anonymity: I confirm the above
Submission Number: 59