Privacy-Preserving Deep Learning: A Survey on Theoretical Foundations, Software Frameworks, and Hardware Accelerators
Abstract: Deep Learning as a Service (DLaaS) has become a cornerstone in enabling access to deep learning capabilities, allowing users to train models or leverage pre-trained ones through APIs. This paradigm significantly lowers the barrier to entry for deploying complex AI systems, making cutting-edge technologies accessible to a broader audience. However, the growing reliance on DLaaS raises significant privacy concerns, particularly when sensitive data is involved. Addressing these challenges necessitates Privacy-Preserving Deep Learning (PPDL), an emerging field focused on safeguarding data and model privacy during training and inference without compromising utility or performance. This paper comprehensively reviews the current PPDL landscape from theoretical, software, and hardware perspectives. We analyze over 100 recent works in areas such as Homomorphic Encryption, Functional Encryption, Multi-Party Computation, Trusted Execution Environments, Federated Learning, and Differential Privacy. We detail core methodologies, technologies, and their applications, comparing approaches to highlight their strengths and weaknesses. Visual summaries of key contributions are presented to aid understanding and provide an accessible overview of the field. Furthermore, we discuss the limitations of existing techniques and identify future research directions.