Understanding the Resource Cost of Fully Homomorphic Encryption in Quantum Federated Learning

TMLR Paper6569 Authors

19 Nov 2025 (modified: 11 Mar 2026) · Rejected by TMLR · Everyone · Revisions · BibTeX · CC BY 4.0
Abstract: Quantum Federated Learning (QFL) enables distributed training of Quantum Machine Learning (QML) models by sharing model gradients instead of raw data. However, these gradients can still expose sensitive user information. To enhance privacy, homomorphic encryption of parameters has been proposed as a solution in QFL and related frameworks. In this work, we evaluate the overhead introduced by Fully Homomorphic Encryption (FHE) in QFL setups and assess its feasibility for real-world applications. We implemented various QML models, including a Quantum Convolutional Neural Network (QCNN), trained in a federated environment with parameters encrypted using the CKKS scheme. To our knowledge, this is the first QCNN trained in a federated setting with CKKS-encrypted parameters. Models of varying architectures were trained to predict brain tumors from MRI scans. The experiments reveal that the memory and communication overhead remains substantial, making FHE challenging to deploy. Minimizing this overhead requires reducing the number of model parameters, which in turn degrades classification performance, introducing a trade-off between privacy and model complexity.
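The federated aggregation step the abstract relies on (the authors later state that FedAvg was used to combine client model updates) can be sketched as follows. This is a hypothetical, plaintext illustration only: in the paper's setup each client vector would be a CKKS ciphertext produced by an FHE library, and the server would perform the same additions and scalar multiplication homomorphically without ever decrypting. The function name `fed_avg` and the toy updates are illustrative, not from the paper.

```python
# Minimal sketch of equally weighted FedAvg aggregation.
# In the FHE-based QFL setting described in the abstract, each
# client update would arrive as a CKKS-encrypted vector, and the
# element-wise sums and the division by a public constant would be
# executed directly on ciphertexts; plain floats are used here.

def fed_avg(client_updates):
    """Element-wise average of equally weighted client parameter vectors."""
    n_clients = len(client_updates)
    n_params = len(client_updates[0])
    return [
        sum(update[i] for update in client_updates) / n_clients
        for i in range(n_params)
    ]

# Three simulated clients, each sending a 4-parameter model update.
updates = [
    [0.1, 0.2, 0.3, 0.4],
    [0.3, 0.2, 0.1, 0.0],
    [0.2, 0.2, 0.2, 0.2],
]
global_params = fed_avg(updates)
print(global_params)  # each entry is approximately 0.2
```

In the encrypted variant, only the clients hold the secret key; the server aggregates blindly, and clients decrypt the averaged model locally. The communication overhead studied in the paper comes from the fact that each CKKS ciphertext is orders of magnitude larger than the plaintext parameter vector it encodes.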
Submission Type: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Dear Reviewers,

We would like to express our sincere gratitude for the thoughtful and constructive feedback provided during the review process. In response, we have carefully revised our manuscript to address the comments and enhance the overall quality of our work. Below is a summary of all the changes we have made since the last submission:

- **Implementation Details:** Added a detailed list of collected metrics and how they are derived, e.g., communication overhead. See Chapter A.1, p. 17. (Requested by reviewer mJTq)
- **Experimental Configuration:** Provided the full configuration of our experiments, including the number of clients and the CKKS parameters. See Chapter A.2, p. 17. (Requested by reviewer mJTq)
- **Model Architecture Details:** Added tables in the appendix listing each layer and the corresponding operations of the underlying models. See Chapter A.3, p. 17. (Requested by reviewer mJTq)
- **VQC Parameter Updates:** Clarified that the parameters of our VQCs are iteratively updated using conventional machine learning methods on a classical computer, as our experiments are simulations. See Chapter 3.1, p. 3. (Requested by reviewer 5peD)
- **Quantum Runtime Statements:** Corrected statements regarding the runtimes of quantum simulations versus actual quantum computers with small numbers of qubits. See Chapter 5.1, p. 7. (Requested by reviewer 5peD)
- **Quantum Hardware Limitations:** Added a note that our study does not capture overhead from real quantum hardware, e.g., gradient estimation or state collapse upon measurement. See Chapter 6.2, p. 12. (Requested by reviewer 5peD)
- **Gradient Aggregation:** Clarified that FedAvg was used to aggregate model updates. See Chapter 4, p. 5. (Requested by reviewer Dspu)
- **Related Work Update:** Included Jiang et al. (2025) in the related work section. See Chapter 2, p. 2. (Requested by reviewer Dspu)

We hope these changes effectively address your concerns and further strengthen our submission.
Thank you once again for your valuable feedback and time. We are available to answer any further questions. Sincerely, TMLR Paper6569 Authors
Assigned Action Editor: ~Junchi_Yan1
Submission Number: 6569