Powerformer: Efficient and High-Accuracy Privacy-Preserving Language Model with Homomorphic Encryption

ACL ARR 2025 February Submission 8225 Authors

16 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: We propose Powerformer, an efficient homomorphic encryption (HE)-based privacy-preserving language model (PPLM) designed to reduce computational overhead while maintaining model performance. Powerformer incorporates three key techniques to optimize encrypted computation: 1) a novel distillation technique that replaces softmax and layer normalization (LN) with computationally efficient power and linear functions, enabling seamless encrypted computation without performance degradation; 2) a pseudo-sign composite approximation method that accurately approximates the GELU and tanh functions with minimal computational overhead; and 3) a homomorphic matrix multiplication algorithm specifically optimized for Transformer models, enhancing efficiency in encrypted environments. By integrating these techniques, Powerformer, built on the BERT-base model, achieves a 45% reduction in computation time compared to the state-of-the-art HE-based PPLM without any loss in accuracy.
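The abstract names the HE-friendly substitutions but gives no formulas. As a rough illustration only, the plaintext sketch below shows one plausible shape for a power-function softmax substitute and for a composite polynomial sign approximation of the kind commonly used for non-arithmetic functions under CKKS-style HE. All function names, exponents, and constants here are hypothetical; the paper's concrete functions and distillation procedure are not reproduced here.

```python
import numpy as np

def approx_sign(x, k=3):
    """Composite low-degree polynomial approximation of sign(x) on [-1, 1].

    Iterating f(x) = (3x - x^3) / 2 pushes inputs toward +-1 using only
    additions and multiplications, the operations CKKS supports natively.
    Shown only to illustrate the style of "pseudo-sign" composite
    approximation; the paper's exact GELU/tanh construction may differ.
    """
    for _ in range(k):
        x = 0.5 * (3.0 * x - x ** 3)
    return x

def power_attention(Q, K, V, p=3, eps=1e-6):
    """Plaintext sketch of a power-function substitute for softmax attention.

    Hypothetical form: clip-and-raise the scores with max(x, 0)**p, then
    row-normalize. Unlike exp(), the power map is a short chain of
    multiplications, and the single division can be replaced by a
    polynomial inverse approximation when evaluated under HE.
    """
    d = Q.shape[-1]
    S = (Q @ K.T) / np.sqrt(d)                     # raw attention scores
    P = np.maximum(S, 0.0) ** p                    # power map replaces exp
    A = P / (P.sum(axis=-1, keepdims=True) + eps)  # row normalization
    return A @ V
```

The common thread behind all three techniques is that CKKS-style HE evaluates additions and multiplications cheaply, while exponentials, divisions, and comparisons require expensive approximation; replacing softmax, LN, GELU, and tanh with power, linear, and low-degree composite polynomial functions keeps every operation inside the cheap set.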
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: privacy-preserving language model, homomorphic encryption
Contribution Types: Approaches low compute settings-efficiency, Theory
Languages Studied: English
Submission Number: 8225