Abstract: This tutorial presents privacy threats to artificially intelligent (AI) systems and proposes the use of several privacy-enhancing technologies (PETs) to address them. Such threats can affect both model owners and system users, may be internal or external to the system, and can manifest at multiple stages of the AI lifecycle (i.e., the model development, deployment, and inference phases). PETs are next-generation technologies that uniquely address threats to "data-in-use" processes, which have previously gone unprotected. In this tutorial, we explore the impact of PETs on a variety of system-level characteristics.
External IDs: doi:10.1145/3643651.3659889