Pre-Training and Fine-Tuning Effects on Out-of-Distribution Detection in Dermatology with Vision Foundation Models
Keywords: Out-of-Distribution Detection, Dermatology Image Classification, Vision Foundation Models, Mahalanobis Distance Scoring, Near-OOD Robustness
TL;DR: Aggressive fine-tuning of vision foundation models improves dermatology classification but degrades near-OOD detection, and a two-stage deployment strategy is proposed to reconcile diagnostic accuracy with clinical safety.
Abstract: Vision foundation models such as CLIP and ImageNet-pretrained transformers achieve strong performance in dermatological image classification, but their reliability under distribution shift remains unclear. This work systematically analyzes how pre-training and fine-tuning intensity affect out-of-distribution (OOD) detection in dermatology. Experiments on Derm7pt-clinic and PAD-UFES-20 across multiple adaptation regimes and post-hoc detectors reveal a consistent trade-off: stronger fine-tuning improves in-distribution (ID) accuracy but reduces OOD separability, especially for clinically realistic near-OOD samples. Across all settings, feature-space methods, particularly Mahalanobis Distance Scoring (MDS), outperform logit-based approaches, indicating that representation structure is the main factor governing OOD detection performance. These results suggest that aggressive task specialization weakens sensitivity to unfamiliar inputs by compressing the separation between ID and semantically adjacent OOD samples. Motivated by this trade-off, a two-stage deployment strategy is also evaluated, in which a geometry-preserving foundation model acts as a conservative OOD filter, followed by an ID-specialized classifier with an explicit “other” class for near-OOD cases. The results provide a practical perspective on safer deployment of dermatology foundation models under clinically realistic distribution shifts.
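The abstract's central finding is that feature-space detectors, especially Mahalanobis Distance Scoring (MDS), outperform logit-based approaches. As a rough illustration of what such a detector computes, the sketch below scores a feature vector by its minimum Mahalanobis distance to per-class means under a shared (class-tied) covariance estimated from in-distribution features. This is a minimal, generic version of the technique, not the authors' implementation; the function names and the regularization constant are illustrative assumptions.

```python
# Minimal sketch of Mahalanobis Distance Scoring (MDS) for OOD detection.
# Assumes `feats` are in-distribution penultimate-layer embeddings and
# `labels` are their class indices; a shared covariance is tied across classes.
import numpy as np

def fit_mds(feats, labels):
    """Estimate per-class means and a shared precision matrix from ID features."""
    classes = np.unique(labels)
    means = {c: feats[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([feats[labels == c] - means[c] for c in classes])
    cov = centered.T @ centered / len(feats)
    cov += 1e-6 * np.eye(cov.shape[0])  # small ridge term for invertibility
    return means, np.linalg.inv(cov)

def mds_score(x, means, precision):
    """Negated minimum Mahalanobis distance to any class mean.

    Higher scores mean more ID-like; low scores flag likely OOD inputs.
    """
    dists = [float((x - mu) @ precision @ (x - mu)) for mu in means.values()]
    return -min(dists)
```

In a two-stage deployment of the kind the abstract describes, a score like this from a geometry-preserving foundation model would gate inputs before they reach the ID-specialized classifier, with low-scoring samples routed to the conservative OOD filter's reject path.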
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 4