CIFF-Net: Contextual image feature fusion for Melanoma diagnosis

Published: 01 Jan 2024, Last Modified: 25 Aug 2024. Biomed. Signal Process. Control, 2024. License: CC BY-SA 4.0
Abstract: Melanoma is considered the deadliest variant of skin cancer, causing around 75% of all skin cancer deaths. To diagnose Melanoma, clinicians assess and compare multiple skin lesions of the same patient concurrently, gathering contextual information about the patterns and abnormalities of the skin. This concurrent multi-image comparative method has so far not been explored by existing deep learning-based schemes. In response to this gap, a contextual image feature fusion (CIFF) based deep neural network (CIFF-Net) is proposed, which integrates patient-level contextual information into traditional approaches for improved Melanoma diagnosis via the concurrent multi-image comparative method. The proposed multi-kernel self-attention (MKSA) module offers better generalization of the extracted features by introducing multi-kernel operations into the self-attention mechanism. To exploit both self-attention and contextual feature-wise attention, an attention-guided module named contextual feature fusion (CFF) is proposed that integrates features extracted from different contextual images into a single feature vector. Finally, in the comparative contextual feature fusion (CCFF) module, the primary and all contextual features are compared concurrently to generate comparative features, mirroring the holistic assessment performed by clinicians. Significant improvement over traditional approaches has been achieved on the ISIC-2020 dataset, with scores of 95.7% AUC and 98.3% accuracy, validating the effectiveness of the proposed contextual learning scheme.
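The abstract describes an MKSA module that injects multi-kernel operations into self-attention. The paper's exact formulation is not given here, so the following is only a minimal NumPy sketch of the general idea under stated assumptions: features of shape (sequence length, dimension), depthwise 1D averaging convolutions with several odd kernel sizes as the "multi-kernel" context, and standard scaled dot-product attention. All function names (`conv1d_same`, `mksa`) and kernel choices are hypothetical, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def conv1d_same(x, k):
    # depthwise 'same' convolution along the sequence axis;
    # a simple averaging kernel stands in for learned weights (assumption)
    pad = k // 2  # assumes odd k so output length matches input
    xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
    kernel = np.ones(k) / k
    return np.stack(
        [np.convolve(xp[:, d], kernel, mode="valid") for d in range(x.shape[1])],
        axis=1,
    )

def mksa(x, kernel_sizes=(1, 3, 5)):
    """Hypothetical multi-kernel self-attention sketch:
    queries/keys come from multi-kernel context, values from raw features."""
    ctx = np.mean([conv1d_same(x, k) for k in kernel_sizes], axis=0)
    scores = ctx @ ctx.T / np.sqrt(x.shape[1])   # scaled dot-product
    return softmax(scores, axis=-1) @ x          # attention-weighted values
```

In the same spirit, the CFF/CCFF modules could be sketched as attention-weighted pooling of such per-image feature vectors across a patient's lesions, but the precise fusion and comparison operators would follow the paper's definitions.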
