FairSimCLR: A Fairness-Aware Contrastive Learning Framework for Demographic Bias Mitigation in Dermatology Imaging

Published: 22 Sept 2025 · Last Modified: 22 Sept 2025 · WiML @ NeurIPS 2025 · CC BY 4.0
Keywords: Self-supervised learning, Medical Imaging, Contrastive learning, Representation Learning, AI Fairness
Abstract: Self-supervised learning (SSL) has shown strong potential for medical imaging, but most approaches do not explicitly address fairness, resulting in unequal performance across demographic subgroups. We introduce FairSimCLR, a fairness-aware extension of SimCLR designed to reduce representational bias while preserving diagnostic accuracy. FairSimCLR integrates group-aware sampling, which ensures balanced representation of demographic subgroups in each batch, with a fairness-regularized contrastive loss that penalizes disparities in the learned embeddings. We pretrained both SimCLR and FairSimCLR on three dermatology datasets (PAD-UFES-20, Italian Dermatology, and Diverse Dermatology Images [DDI]) and evaluated the learned representations by training lightweight classifiers (logistic regression, random forest, and a multi-layer perceptron [MLP]) on frozen encoder features to predict six diagnostic categories. Fairness was assessed using Demographic Parity Difference (DPD), Equal Opportunity Difference (EOD), and Predictive Equality Difference (PED) across age, sex, and skin-tone subgroups. Across all datasets and classifiers, FairSimCLR consistently reduced DPD, EOD, and PED relative to the SimCLR baseline, and the MLP classifier achieved the highest macro-averaged F1-score and AUC, indicating that FairSimCLR improves fairness without sacrificing predictive performance. This work highlights the importance of fairness-aware SSL in medical imaging and suggests pathways toward broader demographic equity in clinical AI.
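
The abstract does not spell out the objective, so the following is a minimal sketch of how a fairness-regularized contrastive loss of this kind might look, assuming a standard SimCLR NT-Xent term plus a penalty on the spread of subgroup centroids in embedding space. The function names, the centroid-distance penalty, and the trade-off weight `lam` are all hypothetical, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """Standard SimCLR NT-Xent loss for a batch of N positive pairs."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)      # (2N, d) unit embeddings
    sim = z @ z.t() / tau                            # scaled cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))            # exclude self-similarity
    # row i's positive is the other augmented view of the same image
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def embedding_disparity(z, groups):
    """Hypothetical fairness penalty: mean pairwise distance between
    subgroup centroids of the normalized embeddings."""
    z = F.normalize(z, dim=1)
    c = torch.stack([z[groups == g].mean(0) for g in groups.unique()])
    k = c.size(0)
    if k < 2:
        return z.new_zeros(())
    return torch.cdist(c, c).sum() / (k * (k - 1))   # mean off-diagonal distance

def fair_simclr_loss(z1, z2, groups, lam=0.1, tau=0.5):
    """Contrastive term plus a weighted embedding-disparity penalty;
    `lam` is an assumed trade-off hyperparameter."""
    penalty = embedding_disparity(torch.cat([z1, z2]), groups.repeat(2))
    return nt_xent(z1, z2, tau) + lam * penalty
```

The group-aware sampling component could be approximated with PyTorch's `torch.utils.data.WeightedRandomSampler`, weighting each example inversely to its subgroup frequency so that every batch covers the demographic subgroups roughly evenly; the paper's exact sampling scheme is not described in the abstract.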
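For reference, the three disparity metrics have standard formulations for a binary prediction $\hat{Y}$, label $Y$, and protected attribute $A$ compared across two groups $a$ and $b$. The two-group, binary form below is an assumption; the abstract does not specify how multi-class predictions or more than two subgroups are aggregated.

```latex
\begin{align}
\mathrm{DPD} &= \bigl|\Pr(\hat{Y}=1 \mid A=a) - \Pr(\hat{Y}=1 \mid A=b)\bigr| \\
\mathrm{EOD} &= \bigl|\Pr(\hat{Y}=1 \mid Y=1, A=a) - \Pr(\hat{Y}=1 \mid Y=1, A=b)\bigr| \\
\mathrm{PED} &= \bigl|\Pr(\hat{Y}=1 \mid Y=0, A=a) - \Pr(\hat{Y}=1 \mid Y=0, A=b)\bigr|
\end{align}
```

EOD compares true-positive rates and PED compares false-positive rates; for all three metrics, lower values indicate smaller subgroup disparities.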
Submission Number: 291