Explainable Diagnosis of Melanoma Based on Localization of Clinical Indicators and Self-Supervised Learning
Abstract: Melanoma is a prevalent and lethal type of cancer that is treatable if diagnosed at early stages of development. Skin lesions are typical warning signs for diagnosing melanoma at an early stage, but they often lead to delayed diagnosis because cancerous and benign lesions look highly similar in the early stages of melanoma. Deep learning (DL) has been used to classify skin lesion images with high classification accuracy, but clinical adoption of DL for this task has been quite limited. A major reason is that the decision processes of DL models are often uninterpretable, which makes them black boxes that are challenging to trust. We develop an explainable DL architecture for melanoma diagnosis. Our architecture segments input images and generates clinically interpretable melanoma indicator masks that are then used for classification. Since our architecture is trained to mimic expert dermatologists, it generates explainable decisions. We also benefit from self-supervised learning to address the challenge of data annotation, which is often expensive and time-consuming in medical domains. Our experiments demonstrate that the proposed architecture matches clinical explanations considerably better than existing architectures while maintaining high classification accuracy.
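To make the described two-stage design concrete, here is a minimal sketch, assuming a PyTorch-style implementation in which a segmentation network predicts one soft mask per clinical indicator and a second network classifies the lesion from those masks. The layer sizes, the number of indicator classes, and the names `IndicatorSegmenter` and `MaskBasedClassifier` are illustrative assumptions, not the authors' actual model.

```python
# Hypothetical sketch of the mask-then-classify pipeline described in the abstract.
import torch
import torch.nn as nn

class IndicatorSegmenter(nn.Module):
    """Predicts one soft mask per clinical melanoma indicator (illustrative)."""
    def __init__(self, num_indicators: int = 5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # 1x1 convolution head: one output channel per indicator mask
        self.mask_head = nn.Conv2d(64, num_indicators, 1)

    def forward(self, x):
        return torch.sigmoid(self.mask_head(self.encoder(x)))

class MaskBasedClassifier(nn.Module):
    """Classifies melanoma vs. benign from the indicator masks only."""
    def __init__(self, num_indicators: int = 5):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(num_indicators, 2)

    def forward(self, masks):
        return self.fc(self.pool(masks).flatten(1))

segmenter, classifier = IndicatorSegmenter(), MaskBasedClassifier()
image = torch.randn(1, 3, 224, 224)   # dummy dermoscopic image
masks = segmenter(image)              # (1, 5, 224, 224) indicator masks
logits = classifier(masks)            # (1, 2) melanoma / benign scores
```

Because the classifier sees only the indicator masks, its decision can be traced back to the localized clinical indicators, which is the source of the explainability claimed in the abstract.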
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Wei_Liu3
Submission Number: 1902