Keywords: artificial intelligence, homomorphic encryption, ethics, privacy
TL;DR: Privacy is not everything in privacy-preserving Artificial Intelligence with Homomorphic Encryption.
Abstract: Homomorphic Encryption (HE) is gaining traction in Artificial Intelligence (AI) as a solution to privacy concerns, particularly in sensitive areas such as healthcare. HE enables computation directly on encrypted data without ever decrypting it, ensuring that private information remains protected throughout the process and effectively enabling Privacy-Preserving Machine and Deep Learning (PP-MDL). While much of the discussion focuses on its privacy benefits, little attention has been paid to the ethical implications of using and training AI models on encrypted data, especially when the resulting models themselves remain encrypted and opaque to both developers and users. In this paper, we explore three ethical perspectives on the use of HE for PP-MDL: the Good, i.e., the clear advantages in terms of privacy and data protection; the Bad, i.e., the practical and conceptual ethical challenges it introduces; and the Ugly, i.e., the subtle, unexpected ethical issues that may arise when HE-powered PP-MDL is deployed in the real world. Our aim is to show that while HE can strengthen privacy, it is not a silver bullet for ethical AI. It can complicate accountability, transparency, and trust, raising important ethical and societal questions that should not be overlooked.
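The core property the abstract relies on — computing on data while it stays encrypted — can be illustrated with a toy additively homomorphic scheme. The sketch below implements textbook Paillier encryption with deliberately tiny, insecure parameters (real HE for PP-MDL uses schemes such as CKKS or BFV via dedicated libraries); every concrete value here is illustrative only, and the scheme is not part of the paper itself.

```python
# Toy Paillier encryption: multiplying two ciphertexts yields a ciphertext
# of the SUM of the plaintexts, so a server can add values it cannot read.
# Parameters are tiny and insecure -- for illustration only.
import random
from math import gcd, lcm

p, q = 61, 53                      # toy primes (hypothetical, insecure)
n, n2 = p * q, (p * q) ** 2        # public modulus and its square
lam = lcm(p - 1, q - 1)            # private key: Carmichael's lambda
mu = pow(lam, -1, n)               # precomputed decryption factor

def encrypt(m: int) -> int:
    """Encrypt m (0 <= m < n) with generator g = n + 1."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:          # r must be invertible mod n
        r = random.randrange(1, n)
    return (1 + m * n) * pow(r, n, n2) % n2

def decrypt(c: int) -> int:
    """Recover m via L(c^lambda mod n^2) * mu mod n, L(x) = (x - 1) / n."""
    return (pow(c, lam, n2) - 1) // n * mu % n

# Homomorphic addition: the party doing this never sees 12 or 30.
c1, c2 = encrypt(12), encrypt(30)
encrypted_sum = c1 * c2 % n2
print(decrypt(encrypted_sum))      # 42
```

Paillier supports only additions on ciphertexts; fully homomorphic schemes extend this to multiplications as well, which is what makes encrypted model training and inference possible.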
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 3011