Keywords: Federated Learning, Data sets, Evaluation, Image segmentation
TL;DR: This study evaluates the performance of the DINOv2 pretrained model in Federated Learning settings across various domains, finding it effective but highlighting the need for domain-specific fine-tuning and bias mitigation.
Abstract: This study investigates the performance of the DINOv2 pretrained model within Federated Learning (FL) environments, focusing on its application to segmentation tasks across diverse domains. While DINOv2 has demonstrated high efficacy in centralized training scenarios, its capabilities under FL conditions—where data privacy and security are paramount—remain underexplored. Utilizing data sets spanning industrial, medical, and automotive sectors, we evaluated DINOv2's accuracy and generalization in decentralized settings. Our findings reveal that federated DINOv2 performs comparably to centralized models, effectively segmenting objects despite the decentralized and heterogeneous nature of the data. However, inherent biases in the pretrained model posed challenges, affecting performance across different domains. These results highlight the need for domain-specific fine-tuning and bias mitigation strategies to enhance the robustness of pretrained models in FL contexts. Future work should address these challenges to maximize the potential of FL in privacy-sensitive applications, ensuring high performance while maintaining data confidentiality.
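The abstract does not specify the aggregation scheme or the segmentation head used on top of DINOv2. Below is a minimal, hedged sketch of one plausible setup it describes: FedAvg aggregation over a lightweight segmentation head trained per client on frozen DINOv2 features. The backbone variant (ViT-S/14 from torch.hub), the 1x1 convolutional head, `NUM_CLASSES`, and `client_loaders` are all illustrative assumptions, not details taken from the paper.

```python
# Sketch of a federated DINOv2 segmentation setup, assuming FedAvg and a
# frozen backbone; the paper does not confirm these implementation choices.
import copy
import torch
import torch.nn as nn

# Assumption: DINOv2 ViT-S/14 from torch.hub as a frozen feature extractor;
# only the segmentation head is trained locally and aggregated globally.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

NUM_CLASSES = 2  # hypothetical; depends on the target dataset


def make_head() -> nn.Module:
    # Simple 1x1 conv head applied to DINOv2 patch tokens reshaped to a grid.
    return nn.Conv2d(backbone.embed_dim, NUM_CLASSES, kernel_size=1)


def local_update(head: nn.Module, loader, epochs: int = 1) -> dict:
    """One client's local training pass; returns the updated head weights."""
    head = copy.deepcopy(head)
    opt = torch.optim.SGD(head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, masks in loader:
            with torch.no_grad():
                feats = backbone.forward_features(images)["x_norm_patchtokens"]
                h = w = int(feats.shape[1] ** 0.5)
                feats = feats.permute(0, 2, 1).reshape(len(images), -1, h, w)
            logits = nn.functional.interpolate(head(feats), size=masks.shape[-2:])
            loss = loss_fn(logits, masks)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head.state_dict()


def fedavg(states: list[dict]) -> dict:
    """Uniform FedAvg: average each head parameter across client updates."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in states]).mean(dim=0)
    return avg


# Usage (hypothetical client_loaders, one per site/domain):
# global_head = make_head()
# for rnd in range(10):
#     states = [local_update(global_head, dl) for dl in client_loaders]
#     global_head.load_state_dict(fedavg(states))
```

Keeping the backbone frozen and aggregating only the head is one way the raw images can remain on each client while still leveraging the pretrained representation; the paper's actual training protocol may differ.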
Submission Number: 3