Contrastive and Non-Contrastive Strategies for Federated Self-Supervised Representation Learning and Deep Clustering

Published: 01 Jan 2024, Last Modified: 17 May 2025, IEEE J. Sel. Top. Signal Process. 2024, CC BY-SA 4.0
Abstract: We investigate federated self-supervised representation learning (FedSSRL) and federated clustering (FedCl), aiming to derive low-dimensional representations of datasets distributed across multiple clients, potentially in a heterogeneous manner. Our proposed solutions for both FedSSRL and FedCl are grounded in a comparative analysis across a broad range of learning strategies. In particular, we show that a two-stage model, beginning with representation learning and followed by clustering, is an effective learning strategy for both tasks. Notably, integrating a contrastive loss as a regularizer significantly boosts performance, even when the task is representation learning. Moreover, for FedCl, a contrastive loss is most effective in both stages, whereas FedSSRL benefits more from a non-contrastive loss. These findings are corroborated by extensive experiments on various image datasets.
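To make the "contrastive loss as a regularizer" idea concrete, below is a minimal sketch of a per-client objective that adds a SimCLR-style NT-Xent contrastive term to a base representation-learning loss. The choice of NT-Xent, the helper names (`nt_xent_loss`, `client_loss`), and the weight `lam` are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent contrastive loss over two augmented views.

    z1, z2: (N, d) embeddings of the same N images under two augmentations.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, d), unit norm
    sim = z @ z.t() / temperature                       # scaled cosine similarities
    n = z1.size(0)
    # Exclude each embedding's similarity with itself from the softmax.
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))
    # The positive pair for row i is row i + n (and vice versa).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def client_loss(base_loss, z1, z2, lam=0.1):
    """Per-client objective: base task loss plus contrastive regularizer.

    `lam` balances the two terms; its value here is a placeholder,
    not a setting reported in the paper.
    """
    return base_loss + lam * nt_xent_loss(z1, z2)
```

In a two-stage federated pipeline, each client would minimize such an objective locally during the representation-learning stage before the clustering stage runs on the learned embeddings.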