A Multi-task learning model with low-level feature sharing and inter-feature guidance for segmentation and classification of medical images

Published: 01 Jan 2024, Last Modified: 06 Jun 2025 · BIBM 2024 · CC BY-SA 4.0
Abstract: Medical image segmentation and classification are both crucial components of computer-aided diagnosis, and past studies have identified inherent correlations between them in various cases. Numerous multi-task models have been developed, most of which leverage shared features extracted by a single feature extractor to address both tasks. However, few have paid attention to the feature differences between the two tasks, which can be pronounced when the segmentation target region is larger than the classification target region. To address this issue, we introduce a model named SCMTL-LSFG, which employs a low-level feature sharing and high-level inter-feature guidance strategy. SCMTL-LSFG comprises a segmentation branch and a classification branch, which share a low-level feature extraction component but have two separate high-level feature extraction components. Since the two tasks are correlated, SCMTL-LSFG leverages the segmentation component to guide the classification component during high-level feature extraction through the Inter-Feature Guidance module we design. Evaluation on a public breast ultrasound image dataset and a COVID-19 chest X-ray image dataset indicates that SCMTL-LSFG effectively improves classification accuracy. Our experimental results also demonstrate that SCMTL-LSFG significantly outperforms three state-of-the-art comparable models on both tasks.
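The architecture described above — a shared low-level extractor feeding two separate high-level branches, with the segmentation features guiding the classification branch — can be sketched schematically. The following is a minimal, dependency-free illustration of the data flow using shape tuples in place of real tensors; all function names, channel counts, and downsampling factors are illustrative assumptions, not the paper's actual implementation.

```python
# Schematic data flow of a low-level-sharing / high-level inter-feature
# guidance multi-task model. Shapes are (channels, height, width).
# Channel counts and strides below are assumptions for illustration.

def shared_low_level(img_shape):
    # Shared low-level extractor: downsamples 4x, 64 channels (assumed).
    c, h, w = img_shape
    return (64, h // 4, w // 4)

def seg_high_level(feat_shape):
    # Segmentation branch: its own high-level extractor.
    c, h, w = feat_shape
    return (256, h // 4, w // 4)

def cls_high_level(feat_shape, seg_feat_shape):
    # Classification branch: separate high-level extractor whose
    # features are guided by (here: concatenated with) the
    # segmentation branch's high-level features.
    c, h, w = feat_shape
    sc, sh, sw = seg_feat_shape
    assert (sh, sw) == (h // 4, w // 4)  # spatially aligned for fusion
    return (256 + sc, sh, sw)

def forward(img_shape, num_classes=2):
    low = shared_low_level(img_shape)          # shared by both tasks
    seg_feat = seg_high_level(low)             # task-specific features
    cls_feat = cls_high_level(low, seg_feat)   # guided by seg features
    seg_out = (num_classes,) + img_shape[1:]   # per-pixel mask logits
    cls_out = (num_classes,)                   # image-level logits
    return seg_out, cls_out

seg_out, cls_out = forward((3, 256, 256), num_classes=2)
print(seg_out, cls_out)  # → (2, 256, 256) (2,)
```

The key design point the sketch captures is that only the low-level component is shared; each task keeps its own high-level pathway, and the guidance flows one way, from segmentation to classification.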