Deep Annotated Learning, Harmonic Descriptors and Automated Diabetic Retinopathy Detection

09 Apr 2018 (modified: 16 May 2018) — MIDL 2018 Conference Submission
  • Abstract: Identifying candidate regions in medical images is of great importance, since it provides intuitive illustrations for doctors and patients of how a diagnosis is inferred. Recently, advances in Deep Learning have dramatically improved the performance of automated Diabetic Retinopathy (DR) detection. Most of these Deep Learning systems treat the Convolutional Neural Network (CNN) as a black box, lacking comprehensive explanation. Our proposed system learns from image-level pre-processed annotations of DR that highlight suspicious regions through harmonic vasculature reconstruction. It mimics the process of a clinician examining an image by selecting regions with a high probability of being lesions. The annotated images are then passed to a CNN, which in turn predicts their respective DR severity. Using annotated images for training/testing also has the clear advantage of increasing detection accuracy. On a clinical dataset of fully gradable images, the algorithm achieved an accuracy of 97.1% with a sensitivity of 96.6% and a specificity of 98.0%, for an AUC value of 99.5%. On the publicly available Messidor-2 image dataset, a sensitivity of 92.9% and a specificity of 98.9% were achieved. For No DR vs. Non-Sight Threatening DR, the accuracy was 89.5% with a sensitivity of 87.5% and a specificity of 91.8%. No DR vs. Sight Threatening DR achieved an accuracy of 97.9% with a sensitivity of 97.9% and a specificity of 98.4%. Meanwhile, for Non-Sight Threatening DR vs. Sight Threatening DR the accuracy was 79.7% with a sensitivity of 77.7% and a specificity of 81.7%.
  • Keywords: Automated Lesion Detection, Deep Learning, Harmonic Analysis
  • Author Affiliation: IRIS, Microsoft
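The accuracy, sensitivity, and specificity figures quoted in the abstract follow the standard confusion-matrix definitions. A minimal sketch of those definitions (not the authors' implementation; the labels below are hypothetical):

```python
# Standard binary-classification metrics from confusion-matrix counts.
# This is illustrative only and is not code from the paper.

def binary_metrics(y_true, y_pred):
    """Return (accuracy, sensitivity, specificity) for binary 0/1 labels.

    sensitivity = TP / (TP + FN)  -- recall on the positive (DR) class
    specificity = TN / (TN + FP)  -- recall on the negative (No DR) class
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return accuracy, sensitivity, specificity

# Toy example with made-up labels (not the paper's data):
acc, sens, spec = binary_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```

The reported AUC would additionally require the CNN's continuous severity scores rather than hard 0/1 predictions, e.g. via a ROC-curve computation.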