Keywords: evaluation framework, foundation models, computational pathology, oncology
Abstract: In computational pathology, models trained with self-supervised learning now surpass supervised models in both scale and performance. However, benchmarking these models remains challenging due to the diversity of downstream tasks and evaluation methods. To address this, we introduce eva (available at https://kaiko-ai.github.io/eva), an open-source framework for evaluating computational pathology foundation models (FMs). eva is designed to be modular and adaptable to both off-the-shelf and custom datasets, metrics, evaluation protocols, and model architectures. We benchmark leading pathology FMs across diverse downstream classification tasks, establishing the first public, reproducible pathology FM leaderboard and advocating for standardized FM evaluation practices.
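The abstract describes the evaluation protocol only at a high level; as a minimal sketch of the kind of downstream evaluation it refers to, the snippet below trains a linear probe on top of a frozen FM backbone for a patch-classification task. All names here (evaluate_linear_probe, the dataset objects, embed_dim) are hypothetical placeholders for illustration and are not eva's actual API.

```python
# Illustrative sketch only: a generic linear-probe evaluation of a frozen
# foundation model on a patch-classification task. Names and signatures are
# hypothetical and do not correspond to eva's actual API.
import torch
from torch import nn
from torch.utils.data import DataLoader, Dataset


def evaluate_linear_probe(backbone: nn.Module,
                          train_ds: Dataset,
                          val_ds: Dataset,
                          embed_dim: int,
                          num_classes: int,
                          epochs: int = 10,
                          lr: float = 1e-3) -> float:
    """Train a linear classifier on frozen embeddings; return validation accuracy."""
    backbone.eval()                              # freeze the FM backbone
    for p in backbone.parameters():
        p.requires_grad = False

    head = nn.Linear(embed_dim, num_classes)
    opt = torch.optim.AdamW(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()

    train_loader = DataLoader(train_ds, batch_size=256, shuffle=True)
    val_loader = DataLoader(val_ds, batch_size=256)

    for _ in range(epochs):
        for images, labels in train_loader:
            with torch.no_grad():
                feats = backbone(images)         # frozen patch embeddings
            loss = loss_fn(head(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()

    # Evaluate the probe on the held-out split.
    correct = total = 0
    with torch.no_grad():
        for images, labels in val_loader:
            preds = head(backbone(images)).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total
```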
Submission Number: 13