Peer-to-Peer Model-Agnostic Meta-Learning

Published: 2024 · Last Modified: 28 Jan 2026 · SAM 2024 · CC BY-SA 4.0
Abstract: In this paper, we propose two distributed meta-learning methods implementable over a peer-to-peer network. We consider a distributed problem in which each computational node possesses a local model along with a private dataset used to train that model. Every node aims to minimize the local loss using its private data and the global loss through a weight-mixing strategy. All nodes are trained using a few-shot multi-task learning method: model-agnostic meta-learning (MAML). We adopt an adapt, learn, and share methodology: each model first adapts to a few private data samples to capture generalizable characteristics; each model then learns by minimizing the loss on a disjoint set of samples from the private dataset; finally, each node shares certain model parameters with its neighboring nodes and updates the local parameters through aggregation. The goal is to train the local models across a diverse range of tasks so that they can quickly learn new tasks from a limited set of training examples. We combine node-level MAML with network-level weight-mixing for few-shot multi-task distributed meta-learning, and we present numerical experiments on real-world datasets to illustrate the performance of the proposed methods.
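The adapt, learn, and share loop described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes synthetic linear-regression tasks, a first-order MAML approximation, a ring topology with uniform mixing weights, and hypothetical step sizes `ALPHA` (adapt) and `BETA` (learn).

```python
import numpy as np

rng = np.random.default_rng(0)
N_NODES, DIM = 3, 5
ALPHA, BETA = 0.05, 0.1  # inner (adapt) / outer (learn) step sizes (assumed)

# Each node holds a private task: y = x @ w_true (synthetic stand-in data).
true_w = [rng.normal(size=DIM) for _ in range(N_NODES)]
models = [np.zeros(DIM) for _ in range(N_NODES)]

# Ring topology: each node mixes with itself and its two neighbors.
neighbors = {i: [(i - 1) % N_NODES, i, (i + 1) % N_NODES]
             for i in range(N_NODES)}

def grad(w, X, y):
    """Gradient of the mean-squared-error loss 0.5*||Xw - y||^2 / n."""
    return X.T @ (X @ w - y) / len(y)

for step in range(200):
    for i in range(N_NODES):
        # Adapt: one inner gradient step on a few private support samples.
        Xs = rng.normal(size=(10, DIM)); ys = Xs @ true_w[i]
        adapted = models[i] - ALPHA * grad(models[i], Xs, ys)
        # Learn: outer update on a disjoint query batch (first-order MAML).
        Xq = rng.normal(size=(10, DIM)); yq = Xq @ true_w[i]
        models[i] = models[i] - BETA * grad(adapted, Xq, yq)
    # Share: aggregate parameters with neighboring nodes (uniform weights).
    models = [np.mean([models[j] for j in neighbors[i]], axis=0)
              for i in range(N_NODES)]
```

The share step averages the pre-mixing parameters of all neighbors simultaneously, which is the simplest instance of the network-level weight-mixing strategy the abstract refers to; the paper's actual mixing weights and topology may differ.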