Knowledge-guided adaptation of pathology foundation models effectively improves cross-domain generalization and demographic fairness
Abstract: Foundation models in computational pathology suffer from site-specific and demographic biases, which compromise their generalizability and fairness. We introduce FLEX, a framework that employs a task-specific information bottleneck, guided by visual and textual domain knowledge, to disentangle robust pathological features from these artifacts. Using three large cohorts (The Cancer Genome Atlas, Clinical Proteomic Tumor Analysis Consortium, and an in-house dataset) across 16 clinical tasks, totaling over 9,900 slides, we demonstrate that FLEX achieves superior zero-shot generalization to unseen external cohorts, significantly outperforming baselines and narrowing the performance gap between seen and unseen domains. A comprehensive fairness analysis confirms that FLEX also effectively mitigates disparities across demographic groups. Furthermore, its versatility and scalability are demonstrated by its compatibility with various foundation models and multiple-instance learning architectures. Our work establishes FLEX as a promising solution for developing more generalizable and equitable pathology AI for diverse clinical settings.

Summary: Pathology foundation models can still struggle with generalizability and fairness across demographic groups. Here, the authors develop FLEX, a framework to enhance cross-domain generalization and demographic fairness in pathology foundation models, improving performance and mitigating disparities in cancer datasets.
DOI: 10.1038/s41467-025-66300-y
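The abstract describes a task-specific information bottleneck that compresses slide features while retaining task-relevant signal. As a rough illustration only, the sketch below computes a standard variational information bottleneck objective (task cross-entropy plus a KL penalty on a Gaussian latent code). The function name `vib_loss`, the `beta` weight, and the plain unit-Gaussian prior are assumptions for illustration; FLEX's actual objective, including its visual and textual knowledge guidance, is not reproduced here.

```python
import numpy as np

def vib_loss(mu, logvar, logits, labels, beta=1e-3):
    """Illustrative variational information bottleneck loss (not FLEX's exact objective).

    mu, logvar : (batch, latent_dim) parameters of the Gaussian latent code
    logits     : (batch, num_classes) task predictions decoded from the latent code
    labels     : (batch,) integer class labels
    beta       : weight of the compression (KL) term
    """
    # KL divergence between N(mu, sigma^2) and the standard normal prior N(0, I),
    # summed over latent dimensions; this term penalizes information kept in the code
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1)

    # Task cross-entropy via a numerically stable log-softmax
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels]

    # Trade off prediction accuracy against compression of site/demographic artifacts
    return float(np.mean(ce + beta * kl))
```

With `beta = 0` this reduces to ordinary cross-entropy; increasing `beta` forces the latent code toward the prior, discarding features (ideally the site-specific artifacts) that are not needed for the task.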