Abstract: Glaucoma remains one of the leading causes of irreversible blindness, and its timely detection is imperative to avoid permanent visual impairment. Deep learning methods offer a path to early detection of glaucoma by reducing the need for manual labor at the screening stage. Consequently, numerous automated methods have been proposed to assist experts in diagnosing glaucoma from fundus images. However, the sole focus on increasing predictive accuracy has resulted in a lack of trust due to the black-box nature of such models. A similar sentiment across multiple high-stakes decision domains has led to a growing demand for replacing black-box models with glass-box ones. In this work, we propose an inherently explainable model that (1) learns class-specific prototypes, which capture the general characteristics or concepts of the pathology, (2) uses the actual visualized prototypes in the decision-making process by computing the similarity between them and the query image, thereby revealing the model's reasoning process, and (3) is end-to-end optimizable. Moreover, the proposed approach does not require joint training of the classification model with a decoder for visualization of the prototypes, simplifying the overall training process. Experimental results demonstrate that our proposed approach achieves performance comparable to its black-box counterparts and outperforms the state-of-the-art baseline, both quantitatively and qualitatively, on the benchmark RIMONE DL dataset.
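To make the prototype-based decision process concrete, the sketch below shows one possible way a class-specific prototype classifier can be wired up in PyTorch: a feature encoder embeds the query fundus image, class scores are obtained as similarities to learnable prototype vectors, and the whole module is trainable end to end. This is a minimal illustration under assumed names (`PrototypeClassifier`, `encoder`, `embed_dim`, `prototypes_per_class`), not the authors' exact architecture, which operates on visualized prototypes rather than plain embedding vectors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrototypeClassifier(nn.Module):
    """Illustrative prototype-based classifier: class logits are the
    similarities between an encoded query image and learned
    class-specific prototypes (hypothetical simplification)."""

    def __init__(self, encoder: nn.Module, embed_dim: int,
                 num_classes: int, prototypes_per_class: int = 5):
        super().__init__()
        self.encoder = encoder  # any feature extractor, e.g. a CNN backbone
        # one bank of learnable prototypes per class, optimized end to end
        self.prototypes = nn.Parameter(
            torch.randn(num_classes, prototypes_per_class, embed_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = F.normalize(self.encoder(x), dim=-1)        # (B, D) query embeddings
        p = F.normalize(self.prototypes, dim=-1)        # (C, K, D) prototypes
        # cosine similarity between each query and every prototype
        sims = torch.einsum("bd,ckd->bck", z, p)        # (B, C, K)
        # each class's evidence is its best-matching prototype's similarity
        logits = sims.max(dim=-1).values                # (B, C)
        return logits
```

In such a setup the logits can be trained with a standard cross-entropy loss, and the per-prototype similarities in `sims` indicate which prototype drove each prediction, which is what makes the reasoning process inspectable.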