Abstract: Diabetic retinopathy (DR) is a blinding disease fraught with uncertainty and potential risks. Deep learning excels at automatic feature extraction and achieves high detection performance in identifying diabetic retinopathy. However, traditional models face the following challenges. First, their computations can be intricate, leading to bloated models with a large number of parameters. Second, when dealing with low-resolution feature maps, a model may fail to fully extract crucial image features because of the information loss incurred by convolution and pooling operations. To address these challenges, we introduce a novel lightweight network, MobileMSAA (Multi-Scale Attention Aggregation), which builds upon MobileNetV3 and has a remarkably small parameter count and computational cost. Furthermore, it incorporates a multi-scale feature aggregation mechanism to enhance the model's performance on low-resolution feature maps. We conducted comparative and ablation experiments on the APTOS 2019 diabetic retinopathy detection dataset. The experimental results demonstrate that our network significantly improves the perception of information across different scales, achieving an accuracy of 95.3% and a specificity of 95.1%, highlighting the superiority of our approach over existing models. Fig. 1 shows that our model has the fewest parameters while achieving the best accuracy. The source code for our model is available at: https://github.com/jihongxu/MSAA.
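The abstract does not specify the internals of the multi-scale feature aggregation mechanism; the general idea behind such modules (pool the feature map at several scales, restore each pooled view to the original resolution, and fuse the views) can be sketched as follows. This is an illustrative stand-in in plain Python on a 2D feature map, not the authors' MobileMSAA implementation, and the function names (`avg_pool`, `upsample_nearest`, `multi_scale_aggregate`) are hypothetical.

```python
# Hedged sketch of multi-scale feature aggregation: average-pool a 2D feature
# map at several scales, upsample each view back, and fuse by element-wise mean.
# NOT the MobileMSAA module itself (which the abstract does not detail).

def avg_pool(fmap, k):
    """Average-pool a 2D map with window and stride k (k must divide H and W)."""
    h, w = len(fmap), len(fmap[0])
    return [
        [sum(fmap[i * k + di][j * k + dj]
             for di in range(k) for dj in range(k)) / (k * k)
         for j in range(w // k)]
        for i in range(h // k)
    ]

def upsample_nearest(fmap, k):
    """Nearest-neighbour upsample by an integer factor k."""
    out = []
    for row in fmap:
        expanded = [v for v in row for _ in range(k)]
        out.extend(list(expanded) for _ in range(k))
    return out

def multi_scale_aggregate(fmap, scales=(1, 2, 4)):
    """Fuse average-pooled views of fmap at several scales by element-wise mean."""
    h, w = len(fmap), len(fmap[0])
    fused = [[0.0] * w for _ in range(h)]
    for k in scales:
        restored = upsample_nearest(avg_pool(fmap, k), k)
        for i in range(h):
            for j in range(w):
                fused[i][j] += restored[i][j] / len(scales)
    return fused
```

A coarser scale contributes a smoothed view of the map, so the fused output retains fine detail from the 1x view while injecting larger-scale context, which is the intuition behind aggregating information across scales on low-resolution feature maps.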
External IDs: dblp:conf/apweb/XiJHLZ24