Linear Speedup in Personalized Collaborative Learning

TMLR Paper 611 Authors

18 Nov 2022 (modified: 17 Sept 2024) · Rejected by TMLR · CC BY 4.0
Abstract: Collaborative training can improve the accuracy of a model for a user by trading off the model's bias (introduced by using data from other, potentially different users) against its variance (due to the limited amount of data on any single user). In this work, we formalize the personalized collaborative learning problem as the stochastic optimization of a target task $0$ given access to $N$ related but different tasks $1,\dots, N$. We give convergence guarantees for two algorithms in this setting---a popular collaboration method known as \emph{weighted gradient averaging}, and a novel \emph{bias correction} method---and explore conditions under which we can achieve linear speedup w.r.t. the number of auxiliary tasks $N$. We also empirically study their performance, confirming our theoretical insights.
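
As a rough illustration of the weighted gradient averaging setup the abstract describes, the sketch below runs SGD on a synthetic target task $0$ while averaging in stochastic gradients from $N$ auxiliary tasks. The quadratic objectives, the uniform weights `alpha`, and the step size are illustrative assumptions and not the paper's actual setup or algorithm.

```python
import numpy as np

# Hypothetical sketch of weighted gradient averaging for personalized
# collaborative learning: optimize target task 0 using a weighted average
# of stochastic gradients from tasks 0..N. All problem constants below
# are illustrative assumptions, not the paper's setup.

rng = np.random.default_rng(0)
d, N = 5, 10                      # dimension, number of auxiliary tasks

# Auxiliary optima are small perturbations of the target's optimum,
# modeling tasks that are "related but different".
target_opt = rng.normal(size=d)
task_opts = [target_opt] + [target_opt + 0.1 * rng.normal(size=d)
                            for _ in range(N)]

def stochastic_grad(x, i):
    """Noisy gradient of the quadratic f_i(x) = 0.5 * ||x - opt_i||^2."""
    return (x - task_opts[i]) + 0.5 * rng.normal(size=d)

# Collaboration weights: alpha[0] on the target task, the rest on the
# auxiliaries; uniform weights are just one possible choice.
alpha = np.full(N + 1, 1.0 / (N + 1))

x, lr = np.zeros(d), 0.1
for step in range(500):
    # Weighted average of one stochastic gradient per task.
    g = sum(a * stochastic_grad(x, i) for i, a in enumerate(alpha))
    x -= lr * g

print("distance to target optimum:", np.linalg.norm(x - target_opt))
```

The sketch makes the abstract's trade-off concrete: averaging over $N+1$ independent stochastic gradients cuts the gradient variance by roughly a factor of $N+1$ (the source of a potential linear speedup), while the shifted auxiliary optima inject bias into the update, which the paper's bias correction method is designed to remove.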
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Qin_Zhang1
Submission Number: 611