Abstract: In this paper, we propose a novel explainable deep learning method for COVID-19 detection in chest X-ray (CXR) images, based on our Texture-Shape DTCWT approach. Using the Dual-Tree Complex Wavelet Transform (DTCWT), the method extracts texture information from the first convolutional layer and shape features from the last convolutional layer of deep convolutional neural networks (CNN, VGG-16, and ResNet50): the lowpass component yields the texture features, while the highpass components yield the shape features. A guided filter-based method then fuses the extracted texture and shape maps into the final explainability map. Our approach is flexible and can likely be adapted to work alongside traditional explainability methods commonly used in the literature. We verify its effectiveness on a widely recognized COVID-19 database of chest X-ray (CXR) images. Through comprehensive experiments, we evaluate its performance against established explainability methods such as the Feature Extraction Map (FEM), computing metrics such as Average Drop % and Increase in Confidence. The results indicate that incorporating texture and shape information through the DTCWT markedly enhances explainability compared to conventional methods.
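The pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: a one-level Haar DWT stands in for the DTCWT (which would normally come from a dedicated library), the feature map is synthetic rather than taken from a real VGG-16/ResNet50 layer, and the fusion step is a generic guided-filter weighting scheme; all function names (`haar_split`, `boxfilter`, `guided_filter`, `fuse_maps`) are hypothetical.

```python
import numpy as np

def haar_split(fmap):
    """One-level 2-D Haar split of a feature map.
    Returns a lowpass band (texture proxy) and the summed highpass
    magnitudes (shape/edge proxy). Stand-in for the paper's DTCWT."""
    a = fmap[0::2, 0::2]; b = fmap[0::2, 1::2]
    c = fmap[1::2, 0::2]; d = fmap[1::2, 1::2]
    low = (a + b + c + d) / 4.0
    lh  = np.abs(a - b + c - d) / 4.0   # horizontal detail
    hl  = np.abs(a + b - c - d) / 4.0   # vertical detail
    hh  = np.abs(a - b - c + d) / 4.0   # diagonal detail
    return low, lh + hl + hh

def boxfilter(img, r):
    """Box filter of radius r via cumulative sums (standard trick)."""
    rows, cols = img.shape
    out = np.zeros_like(img)
    c = np.cumsum(img, axis=0)
    out[:r + 1] = c[r:2 * r + 1]
    out[r + 1:rows - r] = c[2 * r + 1:rows] - c[:rows - 2 * r - 1]
    out[rows - r:] = c[-1] - c[rows - 2 * r - 1:rows - r - 1]
    c = np.cumsum(out, axis=1)
    out2 = np.zeros_like(img)
    out2[:, :r + 1] = c[:, r:2 * r + 1]
    out2[:, r + 1:cols - r] = c[:, 2 * r + 1:cols] - c[:, :cols - 2 * r - 1]
    out2[:, cols - r:] = c[:, -1:] - c[:, cols - 2 * r - 1:cols - r - 1]
    return out2

def guided_filter(I, p, r=2, eps=1e-4):
    """Edge-preserving smoothing of p, guided by I (He et al. formulation)."""
    N = boxfilter(np.ones_like(I), r)
    mI, mp = boxfilter(I, r) / N, boxfilter(p, r) / N
    cov_Ip = boxfilter(I * p, r) / N - mI * mp
    var_I  = boxfilter(I * I, r) / N - mI * mI
    a = cov_Ip / (var_I + eps)
    b = mp - a * mI
    return (boxfilter(a, r) / N) * I + boxfilter(b, r) / N

def fuse_maps(low, high):
    """Generic guided-filter fusion of texture and shape maps:
    a saliency-based weight, refined by the guided filter."""
    sal_t, sal_s = np.abs(low), high
    w = sal_s / (sal_s + sal_t + 1e-8)
    w = np.clip(guided_filter(low, w), 0.0, 1.0)
    return w * high + (1.0 - w) * np.abs(low)

# Synthetic 64x64 "feature map" standing in for a real conv activation.
x = np.linspace(0, np.pi, 64)
fmap = np.outer(np.sin(3 * x), np.cos(2 * x))
low, high = haar_split(fmap)          # 32x32 texture / shape proxies
explain_map = fuse_maps(low, high)    # fused explainability map
```

In the paper's setting, `low`/`high` would instead come from DTCWT decompositions of the first and last convolutional layers (upsampled to a common resolution) of VGG-16 or ResNet50.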
External IDs: dblp:conf/isivc/MohamadiH24