Stress Testing Vision Transformers Using Common Histopathological Artifacts

Published: 09 May 2022, Last Modified: 12 May 2023
Venue: MIDL 2022 Short Papers
Keywords: Artifacts, Histopathology, Transformers, Robustness, Image Corruption
TL;DR: We stress-test Vision Transformers using 10 common histopathological artifacts and find that they are more robust than Convolutional Neural Networks
Abstract: Artifacts on digitized Whole Slide Images, such as blur, tissue folds, and foreign particles, have been shown to degrade the performance of deep convolutional neural networks (CNNs). For prospective deployment of deep learning models in computational histopathology, it is essential that the models are robust to such common artifacts. In this work, we stress-test multi-head self-attention based Vision Transformer models using 10 common artifacts and compare their performance to that of CNNs. We find that Transformers are substantially more robust to artifacts in histopathological images.
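To make the evaluation setup concrete, below is a minimal sketch of the kind of stress test the abstract describes: apply an artifact-like corruption (here, Gaussian blur) to a histopathology patch and compare predictions from a Vision Transformer and a CNN before and after corruption. This is not the authors' code; the use of timm and torchvision, the model names, the file name patch.png, the blur radius, and the normalization statistics are all illustrative assumptions.

```python
# Minimal sketch (not the authors' pipeline): corrupt one patch with a blur
# artifact and check whether a ViT and a CNN change their predictions.
import torch
import timm
from PIL import Image, ImageFilter
from torchvision import transforms

# Placeholder preprocessing; real stress tests would use the models' own stats.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])

def predict(model, image):
    """Run a single PIL image through a model and return the argmax class."""
    model.eval()
    with torch.no_grad():
        logits = model(preprocess(image).unsqueeze(0))
    return logits.argmax(dim=1).item()

# Hypothetical input patch and its blurred ("artifact") counterpart.
patch = Image.open("patch.png").convert("RGB")
blurred = patch.filter(ImageFilter.GaussianBlur(radius=4))

# Hypothetical model choices standing in for the ViTs and CNNs compared in the paper.
vit = timm.create_model("vit_base_patch16_224", pretrained=True)
cnn = timm.create_model("resnet50", pretrained=True)

for name, model in [("ViT", vit), ("CNN", cnn)]:
    clean_pred = predict(model, patch)
    artifact_pred = predict(model, blurred)
    print(f"{name}: clean={clean_pred}, blurred={artifact_pred}, "
          f"prediction changed={clean_pred != artifact_pred}")
```

In a full stress test, this comparison would be run over a labeled test set and all 10 artifact types, measuring the drop in accuracy per artifact rather than per-image prediction flips.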
Registration: I acknowledge that acceptance of this work at MIDL requires at least one of the authors to register and present the work during the conference.
Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
Paper Type: novel methodological ideas without extensive validation
Primary Subject Area: Application: Histopathology
Secondary Subject Area: Application: Other
Confidentiality And Author Instructions: I read the call for papers and author instructions. I acknowledge that exceeding the page limit and/or altering the latex template can result in desk rejection.