Making DenseNet Interpretable: A Case Study in Clinical Radiology

25 Sept 2019 (modified: 05 May 2023) ICLR 2020 Conference Withdrawn Submission
Keywords: Model Interpretation, Medical Image Analysis, Deep Learning
Abstract: The monotonous routine of medical image analysis under tight time constraints has long contributed to work fatigue among medical practitioners. Medical image interpretation can be error-prone, which increases the risk that an incorrect procedure is recommended. While complex deep learning models have surpassed human performance on some computer vision tasks, widespread adoption in the medical field has been held back by, among other factors, poor model interpretability and a lack of high-quality labelled data. This paper introduces a model interpretation and visualisation framework for analysing the feature extraction process of a deep convolutional neural network, and applies it to abnormality detection on the Stanford musculoskeletal radiograph dataset (MURA). The proposed framework provides a mechanism for interpreting DenseNet architectures, offering deeper insight into the paths of feature generation and reasoning within the network. When evaluated on MURA abnormality detection, the framework identifies limitations in the reasoning of a DenseNet applied to radiographs, which can in turn be ameliorated through model interpretation and visualisation.
Code: https://bitbucket.org/cityunilondon/iclr2020/src/master/
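The paper's own interpretation pipeline is provided in the linked repository. As a rough illustration of the kind of DenseNet visualisation discussed in the abstract, the sketch below computes a Grad-CAM-style heatmap from the last dense block of a torchvision DenseNet-169 (the architecture used by the MURA baseline). This is not the authors' framework: the ImageNet weights, the choice of denseblock4 as the target layer, and the input path radiograph.png are all illustrative assumptions.

```python
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# DenseNet-169 is the backbone used by the MURA baseline; ImageNet weights
# stand in here because the paper's fine-tuned weights are not reproduced.
model = models.densenet169(weights="IMAGENET1K_V1")
model.eval()

# Capture the activations of the last dense block and the gradient of the
# predicted-class score with respect to them.
feats, grads = {}, {}

def fwd_hook(module, inputs, output):
    feats["value"] = output
    output.register_hook(lambda grad: grads.update({"value": grad}))

model.features.denseblock4.register_forward_hook(fwd_hook)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # radiographs are single-channel
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

x = preprocess(Image.open("radiograph.png")).unsqueeze(0)  # hypothetical input path

logits = model(x)
score = logits[0, logits.argmax()]  # score of the top-scoring class
model.zero_grad()
score.backward()

# Grad-CAM: weight each channel of the dense-block output by its average
# gradient, sum, rectify, and upsample to the input resolution.
weights = grads["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * feats["value"].detach()).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]
print(cam.shape)  # torch.Size([1, 1, 224, 224])
```

The resulting heatmap can be overlaid on the input radiograph to inspect which regions most influence the network's decision, which is the general kind of reasoning check the abstract describes.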
