Keywords: Computed Tomography slices, Intracranial hemorrhage, CNN, ViT
TL;DR: This paper presents a successful implementation of the recently proposed Vision Transformer using CNN feature maps as input.
Abstract: We propose a feature-generator backbone composed of an ensemble of convolutional neural networks (CNNs) to improve the recently emerging Vision Transformer (ViT) models. We tackle the RSNA intracranial hemorrhage classification problem, i.e., identifying various hemorrhage types from computed tomography (CT) slices. We show that by gradually stacking several feature maps extracted with multiple Xception CNNs, we can build a feature-rich input for the ViT model. Our approach allows the ViT model to attend to relevant features at multiple levels. Moreover, pretraining the "n" CNNs under various paradigms yields a diverse feature set and further improves the performance of the proposed n-CNN-ViT. We achieve a test accuracy of 98.04% with a weighted logarithmic loss of 0.0708. The proposed architecture is modular and scalable in both the number of CNNs used for feature extraction and the size of the ViT.
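The core idea in the abstract, concatenating feature maps from n CNN backbones and turning the result into patch tokens for a ViT, can be sketched as follows. This is a minimal NumPy illustration under assumed shapes; the function names, patch size, and channel counts are illustrative and not the paper's actual implementation.

```python
import numpy as np

def stack_feature_maps(feature_maps):
    """Stack per-CNN feature maps (each C x H x W) along the channel axis,
    mimicking the gradual stacking of n Xception outputs."""
    return np.concatenate(feature_maps, axis=0)

def to_vit_tokens(stacked, patch=2):
    """Split a (C, H, W) stacked map into non-overlapping patch tokens.
    Returns an array of shape (num_tokens, patch * patch * C), the usual
    flattened-patch input expected by a ViT embedding layer."""
    C, H, W = stacked.shape
    tokens = []
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            tokens.append(stacked[:, i:i + patch, j:j + patch].reshape(-1))
    return np.stack(tokens)

# Hypothetical example: n = 3 CNNs, each yielding a 32-channel 8x8 feature map
maps = [np.random.rand(32, 8, 8) for _ in range(3)]
stacked = stack_feature_maps(maps)  # shape (96, 8, 8)
tokens = to_vit_tokens(stacked)     # shape (16, 384)
```

In this sketch the ViT then consumes `tokens` through its patch-embedding projection; scaling the number of CNNs only widens the channel dimension, which matches the modularity claim.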
Paper Type: methodological development
Primary Subject Area: Application: Radiology
Secondary Subject Area: Transfer Learning and Domain Adaptation
Paper Status: original work, not submitted yet
Source Code Url: For this short paper submission, we present only the method we have developed and the relevant results. The methodology is original and could certainly be optimized in future work, in which we will release all source code.
Data Set Url: https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection/data
Registration: I acknowledge that publication of this work at MIDL and in the proceedings requires at least one of the authors to register and present the work during the conference.
Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.