BiNeXt-SMSMVL: A Structure-Aware Multi-Scale Multi-View Learning Network for Robust Fundus Multi-Disease Classification
Abstract: Multiple ocular diseases frequently coexist in fundus images, while image quality is highly susceptible to imaging conditions and patient cooperation, often manifesting as blurring, underexposure, and indistinct lesion regions. These challenges significantly hinder robust multi-disease joint classification. To address them, we propose a novel framework, BiNeXt-SMSMVL (Bilateral ConvNeXt-based Structure-aware Multi-scale Multi-view Learning Network), which integrates structural medical biomarkers with deep semantic image features for robust multi-label fundus disease recognition. Specifically, we first employ automatic segmentation to extract the optic disc/cup and vascular structures, computing medical biomarkers such as the vertical/horizontal cup-to-disc ratio (CDR), vessel density, and fractal dimension as structural priors for classification. In parallel, a ConvNeXt-Tiny backbone extracts multi-scale visual features from raw fundus images, enhanced by SENet channel attention to improve feature representation. Architecturally, the model performs independent predictions on left-eye, right-eye, and fused binocular images, leveraging multi-view ensembling to enhance decision stability. Structural priors and image features are then fused for joint classification. Experiments on public datasets demonstrate that our model maintains stable performance under variable image quality and significant lesion heterogeneity, outperforming existing multi-label classification methods on key metrics including F1-score and AUC. Our approach also exhibits strong robustness, interpretability, and clinical applicability.
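To make the structural-prior stage concrete, the sketch below shows how biomarkers like the vertical CDR and vessel density could be computed from binary segmentation masks. This is a minimal illustration assuming simple NumPy boolean masks; the function names and the measurement conventions (vertical extent counted as rows containing foreground pixels) are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def vertical_cdr(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio from binary masks.

    Vertical extent is measured as the number of rows containing
    at least one foreground pixel (an illustrative convention).
    """
    cup_height = np.count_nonzero(cup_mask.any(axis=1))
    disc_height = np.count_nonzero(disc_mask.any(axis=1))
    return cup_height / disc_height if disc_height else 0.0

def vessel_density(vessel_mask: np.ndarray) -> float:
    """Fraction of image pixels labeled as vessel."""
    return float(vessel_mask.mean())

# Toy example: a 10x10 fundus crop with a 6-row disc and a 3-row cup.
disc = np.zeros((10, 10), dtype=bool)
disc[2:8, 2:8] = True   # disc spans rows 2..7 -> height 6
cup = np.zeros((10, 10), dtype=bool)
cup[3:6, 3:6] = True    # cup spans rows 3..5 -> height 3

print(vertical_cdr(cup, disc))   # 3 / 6 = 0.5
```

Biomarkers computed this way form a small structured feature vector that can be concatenated with the CNN image features before the joint classification head.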
DOI: 10.3390/electronics14234564