Keywords: hardware security, compute governance, AI governance, AI policy, AI security, trusted execution environments, trusted platform modules, privacy preservation
TL;DR: This paper presents a research agenda for hardware-enabled mechanisms (HEMs) for AI governance, surveying mechanisms such as location verification and offline licensing, and discussing prior work and open challenges.
Abstract: This paper surveys the emerging field of hardware-enabled mechanisms (HEMs) for AI governance. We describe four kinds of HEMs: location verification for enforcing export controls, offline licensing for permitting hardware usage, network verification of AI cluster configurations and real-time communication between AI chips, and workload verification of the details of AI training and inference workloads. We discuss open research questions for each mechanism, along with cross-cutting technical and governance challenges for the field. Overall, we aim to provide a research roadmap for developing HEMs that verify responsible AI development.
Submission Number: 22