Abstract: Fine-tuning large language models (LLMs) raises privacy concerns due to the risk of exposing sensitive training data. Federated learning (FL) mitigates this risk by keeping training samples on local devices, but it still faces several privacy problems: (i) recent studies show that adversaries can still infer private information in FL; (ii) LLM parameters are shared publicly during federated fine-tuning, yet developers are often reluctant to disclose them; and (iii) existing works focus on secure inference and neglect privacy-preserving fine-tuning. Motivated by these problems, we propose PriFFT, a privacy-preserving federated fine-tuning mechanism that protects both model parameters and user privacy. Because LLMs have a large number of parameters, we present hybrid secret sharing, which combines arithmetic secret sharing (ASS) and function secret sharing (FSS) to build secure operations and to implement the secure neural network layers and activation functions required for fine-tuning. We optimize several FSS-based secure protocols, including reciprocal calculation, tensor products, natural exponentiation, softmax, sigmoid, hyperbolic tangent, and dropout. Hybrid secret sharing lets PriFFT apply our optimized FSS protocols while combining ASS protocols to support complex computation without extra communication. Compared to existing secret sharing methods, PriFFT reduces the execution time and communication overhead of privacy-preserving fine-tuning by up to 59.1% and 77.0%, respectively, without accuracy loss.
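To make the arithmetic secret sharing (ASS) building block concrete, the sketch below shows two-party additive sharing over a ring, where addition on shares needs no communication. This is a minimal illustration of the general technique, not the paper's implementation; the ring size `MOD` and the two-party setting are assumptions for the example.

```python
import secrets

MOD = 2**32  # assumed ring Z_{2^32}; the paper's actual ring size is not stated here

def share(x):
    """Split x into two additive shares that sum to x modulo MOD."""
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

def reconstruct(s0, s1):
    """Recover the secret by adding both parties' shares."""
    return (s0 + s1) % MOD

def add_shares(a, b):
    """Secure addition: each party adds its local shares; no messages exchanged."""
    return (a[0] + b[0]) % MOD, (a[1] + b[1]) % MOD

x, y = 1234, 5678
sx, sy = share(x), share(y)
sz = add_shares(sx, sy)
assert reconstruct(*sz) == (x + y) % MOD
```

Nonlinear operations such as softmax or sigmoid cannot be computed this way on shares alone, which is why the paper pairs ASS with function secret sharing (FSS) protocols for those functions.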
External IDs: doi:10.1109/tdsc.2026.3661572