Vision Transformers Show Improved Robustness in High-Content Image Analysis

Published: 01 Jan 2022 · Last Modified: 14 Oct 2025 · SDS 2022 · CC BY-SA 4.0
Abstract: In drug development, image-based bioassays are commonplace and are typically run in high throughput on automated microscopes. The resulting cell imaging data is acquired on multiple instruments and at different time points, introducing technical and biological variation that can hamper quantitative analysis across an assay campaign. In this work, we analyze the robustness of Vision Transformers, a recently introduced architecture, with respect to such technical and biological variation. We compare their performance against recent analysis approaches on the Cells Out of Sample (COOS) benchmark dataset from a high-content imaging screen. The experiments suggest that Vision Transformers learn more robust representations, outperforming even specially designed deep learning architectures by a large margin.
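The core mechanism that distinguishes a Vision Transformer from a convolutional network is its input representation: the image is cut into fixed-size patches, each patch is flattened and linearly projected into an embedding, and a class token is prepended before the sequence enters the Transformer encoder. The following is a minimal NumPy sketch of that patch-embedding step only (all shapes and the embedding dimension are illustrative assumptions, not details from the paper):

```python
import numpy as np

def patchify(image, patch_size):
    """Split an (H, W, C) image into non-overlapping flattened patches."""
    H, W, C = image.shape
    p = patch_size
    patches = image.reshape(H // p, p, W // p, p, C)
    # Reorder so each patch's pixels are contiguous, then flatten each patch.
    return patches.transpose(0, 2, 1, 3, 4).reshape(-1, p * p * C)

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64, 3))      # toy image, stand-in for a cell crop

patches = patchify(img, 16)                 # shape (16, 768): 16 patches of 16*16*3 values
W_embed = rng.standard_normal((768, 192)) * 0.02  # hypothetical projection to dim 192
tokens = patches @ W_embed                  # (16, 192) patch embeddings
cls_token = np.zeros((1, 192))              # learnable class token in a real model
seq = np.concatenate([cls_token, tokens])   # (17, 192) sequence fed to the encoder
```

In a trained model the projection and class token are learned parameters and positional embeddings are added to `seq`; this sketch only illustrates how image pixels become a token sequence.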