Abstract: Accurate segmentation of retinal blood vessels is crucial for diagnosing and treating various ophthalmic conditions. Although existing deep learning methods have shown promising results, background pixel noise often degrades segmentation quality. Effectively exploiting features from different domains of an image remains an underexplored yet promising research direction. In this paper, we propose a novel dual-branch network architecture, termed LIONet, which extracts features from RGB images and Local Intensity Order Transform (LIOT) images in two separate branches and merges them in a Cross-Domain Fusion Module (CDFM). This approach effectively combines the global color information of the RGB domain with the local intensity-variation information of the LIOT domain in retinal vessel images. Additionally, we integrate a Residual Excitation Module (REM) into the down-sampling layers of each branch to enhance feature representation and reduce the impact of redundant and irrelevant information. Detailed analyses and comparisons on three publicly available retinal vessel datasets demonstrate the effectiveness of our approach against state-of-the-art methods.
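The LIOT branch consumes a transform of the grayscale image rather than raw intensities. As a rough illustration of the idea, the sketch below compares each pixel with its neighbors along four directions and packs the comparisons into per-direction 8-bit codes, which makes the representation depend on local intensity *order* rather than absolute values. The comparison range of 8 pixels, the edge padding, and the direction ordering are assumptions for this sketch, not details confirmed by the abstract.

```python
import numpy as np

def liot(gray: np.ndarray, max_offset: int = 8) -> np.ndarray:
    """Sketch of a Local Intensity Order Transform (LIOT).

    For each pixel, compare its intensity with the `max_offset` pixels
    along each of four directions (up, down, left, right). Each
    comparison contributes one bit, so every direction yields an
    8-bit code in [0, 255], producing a 4-channel output image.
    Offset count, padding mode, and direction order are assumptions.
    """
    h, w = gray.shape
    g = gray.astype(np.int32)
    out = np.zeros((4, h, w), dtype=np.uint8)
    # Direction unit vectors as (row, col): up, down, left, right.
    dirs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    padded = np.pad(g, max_offset, mode="edge")
    for d, (dr, dc) in enumerate(dirs):
        code = np.zeros((h, w), dtype=np.uint8)
        for i in range(1, max_offset + 1):
            # Window of padded image shifted i pixels along direction d.
            r0 = max_offset + dr * i
            c0 = max_offset + dc * i
            shifted = padded[r0:r0 + h, c0:c0 + w]
            # Set bit (i - 1) where the center pixel is brighter.
            code |= (g > shifted).astype(np.uint8) << (i - 1)
        out[d] = code
    return out
```

In a dual-branch setup like the one described, the 4-channel LIOT output would feed one encoder branch while the original RGB image feeds the other, with the CDFM fusing the two feature streams.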