Self-consistent deep approximation of retinal traits for robust and highly efficient vascular phenotyping of retinal colour fundus images

Published: 16 Jul 2024 (last modified: 16 Jul 2024) · MICCAI Student Board EMERGE Workshop 2024 · Oral · CC BY 4.0
Keywords: Retinal image analysis, Deep learning, Robustness
TL;DR: DARTv2 improves on the original DART model for retinal vascular phenotyping, offering greater robustness, repeatability, and efficiency, and extends its output to two retinal traits: Fractal Dimension and Vessel Density.
Abstract: Retinal colour fundus images are a fast, low-cost, non-invasive way of imaging the retinal vasculature, which could provide information about non-ocular, systemic health. Traditional approaches to retinal vascular phenotyping use handcrafted, multi-step pipelines that are computationally expensive and not robust to common quality issues. Recently, Deep Approximation of Retinal Traits (DART) was proposed, which trains a neural network to mimic an existing pipeline more efficiently and robustly: DART is orders of magnitude faster, more robust, and more repeatable. However, the original DART was not explicitly trained for repeatability, provides only a single retinal trait, Fractal Dimension (FD), and uses a limited set of augmentations. We propose DARTv2, which increases repeatability with a self-consistency loss, robustness with additional augmentations such as imaging overlays, and utility by adding Vessel Density (VD) as a second retinal trait alongside FD. DARTv2 shows very high agreement (Pearson 0.9392 for FD and 0.9612 for VD, both $p \ll 0.05$) with AutoMorph, the pipeline it is based on. DARTv2 is far more robust than AutoMorph and also more robust than the original DART. Finally, DARTv2 is 200 times faster than AutoMorph and 4 times faster than the original DART, while taking up less storage space. DARTv2 will be made available to researchers upon publication.
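The two retinal traits and the self-consistency penalty named in the abstract can be illustrated with a minimal NumPy sketch. This is not the DARTv2 or AutoMorph implementation: the function names are illustrative, VD is taken as the vessel-pixel fraction of a binary segmentation mask, FD is estimated with standard box counting, and the self-consistency loss is assumed to be a simple squared disagreement between trait predictions for two augmented views of the same image.

```python
import numpy as np


def vessel_density(mask):
    """Vessel Density as the fraction of vessel pixels in a binary
    mask (illustrative definition; AutoMorph's may differ)."""
    return float(np.asarray(mask, dtype=bool).mean())


def fractal_dimension(mask):
    """Box-counting estimate of Fractal Dimension on a binary mask."""
    mask = np.asarray(mask, dtype=bool)
    # Pad to a power-of-two square so boxes tile the image exactly.
    n = 1 << int(np.ceil(np.log2(max(mask.shape))))
    padded = np.zeros((n, n), dtype=bool)
    padded[: mask.shape[0], : mask.shape[1]] = mask
    sizes, counts = [], []
    s = n
    while s >= 1:
        # Count boxes of side s that contain at least one vessel pixel.
        boxes = padded.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        filled = int(boxes.sum())
        if filled > 0:
            sizes.append(s)
            counts.append(filled)
        s //= 2
    # FD is the slope of log(count) against log(1/box size).
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)


def self_consistency_loss(pred_a, pred_b):
    """Mean squared disagreement between trait predictions for two
    augmented views of the same image (sketch of a consistency penalty)."""
    pred_a, pred_b = np.asarray(pred_a, float), np.asarray(pred_b, float)
    return float(np.mean((pred_a - pred_b) ** 2))
```

For a sanity check, a fully filled mask has VD of 1.0 and a box-counting FD near 2, while identical predictions for two views give zero consistency loss.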
Submission Number: 2