Impact of Layer Selection in Histopathology Foundation Models on Downstream Task Performance

Published: 27 Apr 2024, Last Modified: 26 May 2024 · MIDL 2024 Short Papers · CC BY 4.0
Keywords: computational pathology, self-supervised pre-training, foundation models, layer selection
Abstract: Self-supervised vision transformer models trained on large histopathology datasets are increasingly used as feature encoders for downstream tasks. However, their final layer might not be optimal for all tasks due to the mismatch between the pre-training and downstream objectives. We investigate the influence of layer selection in five public, transformer-based histopathology encoders on downstream task performance at both the patch and slide level. Our results demonstrate that choosing a different layer for feature encoding can lead to performance improvements of up to eleven percent, depending on the task and the model.
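To make the idea of layer selection concrete, below is a minimal sketch (not the authors' code) of how features could be extracted from an intermediate transformer block of a ViT-style encoder with a PyTorch forward hook, instead of taking only the final layer's output. The loader name `load_pretrained_histopathology_vit`, the `model.blocks` attribute, and the block index are assumptions for illustration.

```python
import torch
import torch.nn as nn


def extract_layer_features(model: nn.Module, block: nn.Module, images: torch.Tensor) -> torch.Tensor:
    """Run the encoder and return the hidden states produced by `block`."""
    captured = {}

    def hook(_module, _inputs, output):
        # Store the block's output (shape [batch, tokens, dim] for ViT blocks).
        captured["feats"] = output.detach()

    handle = block.register_forward_hook(hook)
    try:
        with torch.no_grad():
            model(images)  # full forward pass; we only keep the hooked activations
    finally:
        handle.remove()

    feats = captured["feats"]
    # Use the class token as the patch-level embedding (mean-pooling the patch
    # tokens is a common alternative).
    return feats[:, 0] if feats.dim() == 3 else feats


# Hypothetical usage with a ViT-like encoder whose transformer blocks live in `model.blocks`:
# model = load_pretrained_histopathology_vit()   # assumption: any pre-trained ViT-style encoder
# features = extract_layer_features(model, model.blocks[9], batch_of_patches)
# The resulting features can then feed a linear probe (patch level) or a
# slide-level aggregator such as attention-based MIL.
```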
Submission Number: 83