Efficient Privacy-Preserving Federated Learning With Selective Parameter Encryption

TMLR Paper 5103 Authors

13 Jun 2025 (modified: 22 Jun 2025) · Under review for TMLR · CC BY 4.0
Abstract: Federated learning (FL) trains machine learning models on distributed devices by aggregating local model updates instead of raw local data. However, privacy concerns remain, because the local models aggregated on the server can expose sensitive information through inversion attacks. Privacy-preserving methods such as homomorphic encryption (HE) therefore become necessary for FL training. Despite its advantages, applying HE to FL training incurs impractical computation and communication overheads, especially for foundation models. In this paper, we present an efficient, privacy-preserving federated learning framework that uses selective parameter encryption with theoretical guarantees. Our approach selectively encrypts the most sensitive parameters, significantly reducing both computation and communication overheads during training while providing a quantifiable privacy guarantee. Our framework achieves considerable overhead reduction, particularly for large foundation models (e.g., a 100x reduction for GPT-2), demonstrating its potential for scalable HE-based FL deployment.
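To make the idea concrete, below is a minimal Python sketch of selective parameter encryption for a single aggregation round. It is an illustration under stated assumptions, not the paper's actual implementation: the `enc`/`dec` placeholders stand in for a real HE scheme (e.g., CKKS), the magnitude-based sensitivity proxy and the `ratio` parameter are hypothetical choices, and the paper's own selection criterion and privacy accounting may differ.

```python
import numpy as np

# Hypothetical placeholders for homomorphic encryption: a real deployment
# would use an HE library; enc()/dec() here only mark which values are protected.
def enc(values):
    return {"ciphertext": values.copy()}   # stand-in for HE encryption

def dec(ciphertext):
    return ciphertext["ciphertext"]        # stand-in for HE decryption

def select_sensitive(update, ratio=0.1):
    """Select the top `ratio` fraction of parameters by magnitude.
    This is only an illustrative sensitivity proxy, not the paper's criterion."""
    k = max(1, int(ratio * update.size))
    idx = np.argsort(np.abs(update))[-k:]
    mask = np.zeros(update.size, dtype=bool)
    mask[idx] = True
    return mask

def client_package(update, mask):
    """Encrypt only the selected (sensitive) entries; send the rest in plaintext."""
    return {"enc": enc(update[mask]), "plain": update[~mask], "mask": mask}

def server_aggregate(packages):
    """Average plaintext parts directly and encrypted parts 'homomorphically'
    (simulated here by decrypting the stand-in ciphertexts before averaging)."""
    mask = packages[0]["mask"]
    plain_avg = np.mean([p["plain"] for p in packages], axis=0)
    enc_avg = enc(np.mean([dec(p["enc"]) for p in packages], axis=0))
    agg = np.empty(mask.size)
    agg[~mask] = plain_avg
    agg[mask] = dec(enc_avg)   # in practice clients decrypt the encrypted part locally
    return agg

# Toy round: three clients share a common mask over a 10-parameter update.
rng = np.random.default_rng(0)
updates = [rng.normal(size=10) for _ in range(3)]
mask = select_sensitive(updates[0], ratio=0.2)
packages = [client_package(u, mask) for u in updates]
print(server_aggregate(packages))
```

Because only the masked fraction of parameters passes through the (expensive) encrypted path, both ciphertext size and HE computation shrink roughly in proportion to the selection ratio, which is the source of the overhead reduction the abstract describes.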
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=bcrGQAIReT
Changes Since Last Submission: Added the top header showing the submission is under review.
Assigned Action Editor: ~Yaoliang_Yu1
Submission Number: 5103