Enhancing trust of deep learning models with post-quantum digital signatures

Published: 2025 · Last Modified: 26 Feb 2026 · J. Supercomput. 2025 · CC BY-SA 4.0
Abstract: High-performance computing (HPC) is crucial for artificial intelligence (AI) and deep learning (DL) but faces challenges related to scalability, data transfer costs, and security risks. Federated Learning (FL) enables collaborative model training without centralized data aggregation. However, FL introduces vulnerabilities, as exchanged models can be intercepted and manipulated, necessitating robust cryptographic protection. With the advent of quantum computing, traditional security mechanisms are at risk, requiring the adoption of Post-Quantum Cryptographic (PQC) algorithms. This study benchmarks three PQC digital signature algorithms: Falcon, SPHINCS+, and ML-DSA. Their execution time, memory usage, and computational efficiency are evaluated in a simulated FL setting. To extend the analysis, different cryptographic hash functions (SHA3-256, SHA3-512, and BLAKE3) are analyzed to assess hashing efficiency under varying computational loads. Both centralized and decentralized FL scenarios are simulated, incorporating PQC-based digital signatures at each phase of the communication pipeline to ensure model integrity and authenticity. The results provide insights into the trade-offs between security and computational overhead, guiding the selection of scalable cryptographic solutions for FL. Falcon and ML-DSA demonstrate minimal impact on computational performance, making them strong candidates for securing FL environments. Future research directions include the direct signing of DL models to enhance security and the integration of widely used FL libraries for more realistic evaluations. These advancements could improve the practical deployment of post-quantum security solutions in FL, ensuring resilience against emerging quantum threats.
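The abstract describes signing each exchanged model update so the receiver can check integrity and authenticity before aggregation. A minimal sketch of that sign-then-verify flow is below, assuming a serialized model as raw bytes. The SHA-3 digests are the ones the paper evaluates (BLAKE3 requires a third-party package, so only the stdlib SHA-3 variants appear here); the HMAC step is a classical stand-in for a PQC scheme such as Falcon or ML-DSA, used purely to illustrate the pipeline, not as a post-quantum primitive.

```python
import hashlib
import hmac

def model_digest(model_bytes: bytes, algo: str = "sha3_256") -> bytes:
    """Hash a serialized model; algo may be 'sha3_256' or 'sha3_512'."""
    return hashlib.new(algo, model_bytes).digest()

# Stand-in for a PQC signature (Falcon / SPHINCS+ / ML-DSA). HMAC is a
# symmetric MAC, not a post-quantum public-key signature; it only shows
# where signing and verification sit in the FL communication pipeline.
def sign_update(key: bytes, model_bytes: bytes) -> bytes:
    return hmac.new(key, model_digest(model_bytes), "sha3_256").digest()

def verify_update(key: bytes, model_bytes: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign_update(key, model_bytes), tag)

# One round of the centralized scenario: a client signs its update and
# the server verifies the tag before aggregating.
key = b"demo-key"                      # hypothetical shared demo key
update = b"serialized model weights"   # placeholder for real weights
tag = sign_update(key, update)
assert verify_update(key, update, tag)
assert not verify_update(key, update + b"tampered", tag)
```

In the real protocol each party would hold a PQC key pair and the verifier would use the signer's public key; the hash-then-sign structure shown here is what lets the paper vary the hash function (SHA3-256 vs SHA3-512) independently of the signature scheme.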