Abstract: We study few-shot Natural Language Understanding (NLU) tasks with Large Language Models (LLMs) in federated learning (FL) scenarios, a setting made challenging by limited labeled data and the constraints of mobile devices. Recent studies show that LLMs can handle tasks such as sentiment analysis and arithmetic reasoning; however, their large size leads to high computation and communication costs, making traditional FL impractical. To address this, we propose Low-Parameter Federated Learning (LP-FL), which combines LLM prompt learning with efficient communication and federation techniques. LP-FL enables clients to assign soft labels to unlabeled data, expanding the labeled set during the FL process, and uses Low-Rank Adaptation (LoRA) for cost-efficient trainable-parameter construction, local model fine-tuning, and global model federation. LP-FL performs strongly on sentiment analysis across various FL scenarios and is comparable to centralized training in a small number of them. Notably, in a semi-supervised context, LP-FL is more robust than Full-Parameter Federated Learning (FP-FL). We attribute this to LP-FL's much smaller set of trainable parameters, which makes it less vulnerable to overfitting on the label noise that arises in semi-supervised scenarios and thus yields superior performance over FP-FL.
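
The following is a minimal sketch (not the authors' code) of the LP-FL idea described above, assuming a plain PyTorch setup: each client freezes the pretrained weights, fine-tunes only low-rank LoRA matrices, and the server federates (averages) just those low-rank parameters, keeping communication cheap. The names LoRALinear, lora_state, and fed_average are illustrative assumptions, not part of the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pretrained linear layer with a trainable low-rank (LoRA) update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pretrained weights
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # y = W0 x + scaling * B A x; only A and B are trained and communicated
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

def lora_state(model: nn.Module) -> dict:
    """Collect only the LoRA parameters a client needs to upload."""
    return {k: v.detach().clone()
            for k, v in model.state_dict().items() if "lora_" in k}

def fed_average(client_states: list) -> dict:
    """FedAvg over the small LoRA parameter sets received from all clients."""
    avg = {}
    for key in client_states[0]:
        avg[key] = torch.stack([s[key] for s in client_states]).mean(dim=0)
    return avg
```

In this sketch, a round of LP-FL would consist of each client fine-tuning its LoRALinear modules locally (optionally on soft-labeled data), uploading lora_state(model), and the server broadcasting fed_average(...) back; the frozen base weights never leave the devices.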