A generalizable pathology foundation model using a unified knowledge distillation pretraining framework

Jiabo Ma, Zhengrui Guo, Fengtao Zhou, Yihui Wang, Yingxue Xu, Jinbang Li, Fang Yan, Yu Cai, Zhengjie Zhu, Cheng Jin, Yi Lin, Xinrui Jiang, Chenglong Zhao, Danyi Li, Anjia Han, Zhenhui Li, Ronald Cheong Kin Chan, Jiguang Wang, Peng Fei, Kwang-Ting Cheng et al. (3 additional authors not shown)

Published: 02 Sept 2025 · Last Modified: 12 Nov 2025 · Nature Biomedical Engineering · CC BY-SA 4.0
Abstract: The generalization ability of foundation models in computational pathology (CPath) is crucial for their clinical success. However, current foundation models have been evaluated on only a limited number and variety of tasks, leaving their generalization ability unclear. We establish a comprehensive benchmark to evaluate the performance of off-the-shelf foundation models across six distinct clinical task types, encompassing a total of 72 specific tasks. Our findings reveal that existing foundation models excel at certain task types but struggle to handle the full breadth of clinical tasks. To improve the generalization of pathology foundation models, we propose a unified knowledge distillation framework consisting of both expert and self-knowledge distillation: the former allows the model to learn from the knowledge of multiple expert models, while the latter leverages self-distillation to enable image representation learning via local–global alignment. On the basis of this framework, we develop a Generalizable Pathology Foundation Model (GPFM). Evaluated on the established benchmark, GPFM achieves an average rank of 1.6 and ranks first in 42 of the 72 tasks, positioning it as a promising method for feature representation in CPath. GPFM consolidates expertise from a variety of existing models for use across a broad spectrum of computational pathology tasks.
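The abstract names the two distillation components but not their exact losses. The sketch below is a minimal, hypothetical PyTorch illustration of how such a unified objective could be composed, assuming a DINO-style teacher–student setup for the self-distillation (local–global alignment) term and a feature-matching loss against frozen expert encoders for the expert term; all names (`self_distillation_loss`, `expert_distillation_loss`, `unified_kd_loss`, the temperatures, and the weighting `lam`) are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F


def self_distillation_loss(student_local: torch.Tensor,
                           teacher_global: torch.Tensor,
                           temp_s: float = 0.1,
                           temp_t: float = 0.04) -> torch.Tensor:
    """DINO-style local-global alignment (assumed form): the student encodes
    local crops, the teacher encodes global crops, and the student's output
    distribution is pulled toward the sharpened teacher distribution."""
    t = F.softmax(teacher_global / temp_t, dim=-1).detach()  # teacher gets no gradient
    s = F.log_softmax(student_local / temp_s, dim=-1)
    return -(t * s).sum(dim=-1).mean()  # cross-entropy between distributions


def expert_distillation_loss(student_feats: torch.Tensor,
                             expert_feats_list: list[torch.Tensor]) -> torch.Tensor:
    """Expert knowledge distillation (assumed form): pull the student's
    features toward each frozen expert's features via cosine distance,
    averaged over experts."""
    losses = [1 - F.cosine_similarity(student_feats, e.detach(), dim=-1).mean()
              for e in expert_feats_list]
    return torch.stack(losses).mean()


def unified_kd_loss(student_local, teacher_global,
                    student_feats, expert_feats_list, lam: float = 1.0):
    """Combine the two terms; `lam` is a hypothetical balancing weight."""
    return (self_distillation_loss(student_local, teacher_global)
            + lam * expert_distillation_loss(student_feats, expert_feats_list))


# Hypothetical usage with dummy tensors (batch of 8, made-up dimensions):
s_loc, t_glob = torch.randn(8, 1024), torch.randn(8, 1024)
s_feat = torch.randn(8, 768)
experts = [torch.randn(8, 768) for _ in range(3)]  # e.g. three frozen expert encoders
loss = unified_kd_loss(s_loc, t_glob, s_feat, experts)
```

In a real training loop the teacher would typically be an exponential moving average of the student and the expert encoders would be kept frozen; those details are standard for self-distillation setups but are not specified in this abstract.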