Personalized Federated Learning for Text Classification with Gradient-Free Prompt Tuning

Anonymous

17 Apr 2023 · ACL ARR 2023 April Blind Submission · Readers: Everyone
Abstract: In this paper, we study personalized federated learning for text classification with Pretrained Language Models (PLMs). We identify two challenges in efficiently leveraging PLMs for personalized federated learning: 1) Communication. PLMs are usually large, e.g., with hundreds of millions of parameters, incurring a huge communication cost in a federated setting. 2) Local Training. Training with PLMs generally requires back-propagation, during which memory consumption can be several times that of forward-propagation. This may not be affordable when the PLMs are trained locally on the clients, since the clients may be resource-constrained, e.g., mobile devices with limited memory. Additionally, the PLMs may be provided only as concealed APIs, for which back-propagation is not available. For the first challenge, we adopt prompt tuning for PLMs, which trains only the prompt parameters while the pretrained parameters are kept frozen. We further propose a compression method for the learned prompts to reduce communication cost. For the second challenge, we propose a gradient-free approach based on discrete local search over natural language tokens, which circumvents gradient computation via back-propagation while also reducing the communication cost. Experiments on multiple datasets demonstrate the effectiveness of our method.
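The submission page does not include code. As a rough illustration of the gradient-free idea described in the abstract, below is a minimal sketch of discrete local search over natural language prompt tokens, treating the PLM as a black box that returns only predictions, never gradients. The vocabulary, the `score_prompt` stand-in, and all hyperparameters are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of gradient-free prompt search over discrete natural language tokens.
# The PLM is treated as a black box: we only query it for scores, never gradients.
# `score_prompt` is a toy stand-in for querying a real (possibly API-only) PLM classifier.

import random

# Hypothetical token pool; in practice this would be (a subset of) the PLM vocabulary.
VOCAB = ["great", "terrible", "movie", "review", "sentiment", "is", "this", "overall"]

def score_prompt(prompt_tokens, examples=None):
    """Stand-in for validation accuracy of a black-box PLM prompted with `prompt_tokens`.
    In the real setting this would call the (possibly concealed) PLM API on local data."""
    # Toy heuristic: reward prompts that mention task-relevant words.
    relevant = {"sentiment", "review", "overall"}
    return sum(tok in relevant for tok in prompt_tokens) / max(len(prompt_tokens), 1)

def discrete_local_search(prompt_len=4, iters=200, seed=0, examples=None):
    """Hill-climbing over discrete prompts: mutate one token at a time,
    keep the candidate if its black-box score does not decrease."""
    rng = random.Random(seed)
    prompt = [rng.choice(VOCAB) for _ in range(prompt_len)]  # random initial prompt
    best = score_prompt(prompt, examples)
    for _ in range(iters):
        pos = rng.randrange(prompt_len)       # pick a position to mutate
        cand = prompt.copy()
        cand[pos] = rng.choice(VOCAB)         # propose a replacement token
        s = score_prompt(cand, examples)
        if s >= best:                         # accept if no worse (no gradients needed)
            prompt, best = cand, s
    return prompt, best

if __name__ == "__main__":
    tokens, acc = discrete_local_search()
    print("searched prompt:", " ".join(tokens), "| score:", acc)
```

Since the searched prompt is just a short sequence of discrete tokens, a client would only need to communicate token indices rather than continuous prompt embeddings, which is consistent with the abstract's claim that the gradient-free approach also reduces communication cost.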
Paper Type: long
Research Area: Efficient/Low-Resource Methods for NLP