Keywords: Dual encoding models, Federated learning, Representation learning, Self-supervised learning, Federated self-supervised learning
TL;DR: Novel approach for training dual encoding models on distributed data composed of many small, non-IID client datasets.
Abstract: Dual encoding models that encode a pair of inputs are widely used for representation learning. Many approaches train dual encoding models by maximizing agreement between pairs of encodings on centralized training data. However, in many scenarios, datasets are inherently decentralized across many clients, motivating federated learning. In this work, we focus on federated training of dual encoding models on decentralized data composed of many small, non-IID (not independent and identically distributed) client datasets. Existing approaches require large and diverse training batches to work well and perform poorly when naively adapted to the setting of small, non-IID client datasets using federated averaging. We observe that large-batch loss computation can be simulated on small individual clients for loss functions that are based on encoding statistics. Based on this insight, we propose a novel federated training approach, Distributed Cross Correlation Optimization (DCCO), which trains dual encoding models using encoding statistics aggregated across clients, without sharing individual samples or encodings. Our experimental results on two datasets demonstrate that the proposed approach outperforms federated variants of existing approaches by a large margin.
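The key observation above — that a statistics-based loss over a large virtual batch can be assembled from per-client aggregates — can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's actual DCCO protocol: it assumes each client holds pre-normalized encodings and contributes only a partial cross-correlation sum and a sample count, which the server combines and plugs into a Barlow Twins-style cross-correlation objective. The function names (`client_statistics`, `aggregate_cross_correlation`, `cross_correlation_loss`) are invented for this sketch.

```python
import numpy as np

def client_statistics(z_a, z_b):
    """On one client: compute local sufficient statistics from the
    (n_i, d) encodings of the two inputs. Only the d x d partial sum
    and the count leave the client, not individual encodings.
    (Hypothetical sketch; the abstract does not specify the protocol.)"""
    return z_a.T @ z_b, z_a.shape[0]

def aggregate_cross_correlation(stats):
    """On the server: sum the partial statistics from all clients to
    recover the cross-correlation matrix of the full virtual batch."""
    total = sum(partial for partial, _ in stats)
    n = sum(count for _, count in stats)
    return total / n

def cross_correlation_loss(c, lam=5e-3):
    """A statistics-based objective in the Barlow Twins style: push the
    diagonal of C toward 1 (agreement) and off-diagonals toward 0
    (redundancy reduction)."""
    on_diag = np.sum((np.diag(c) - 1.0) ** 2)
    off_diag = np.sum(c ** 2) - np.sum(np.diag(c) ** 2)
    return on_diag + lam * off_diag

# The aggregated matrix equals the one a centralized large batch would give,
# assuming encodings are normalized consistently across clients.
```

Because `z_a.T @ z_b` sums over samples, summing these partial matrices across clients yields exactly the cross-correlation of the concatenated batch, which is why small clients suffice for this family of losses.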
Community Implementations: [2 code implementations (CatalyzeX)](https://www.catalyzex.com/paper/federated-training-of-dual-encoding-models-on/code)