Track: Sociotechnical
Keywords: AI and law, Liability, AI and regulation, AI and policy, AI governance
TL;DR: We leverage existing ML and HCI research to provide insights on concrete questions that arise when applying traditional fault-based liability laws to AI agents.
Abstract: AI agents are loosely defined as systems capable of executing complex, open-ended tasks. Many have raised concerns that these systems will pose significant challenges to regulatory and legal frameworks, particularly in tort liability. However, because there is no universally accepted definition of an AI agent, concrete analyses of these challenges remain limited, especially as AI systems continue to grow in capability. In this paper, we argue that by focusing on properties of AI agents rather than on the threshold at which an AI system becomes an agent, we can map existing technical research to explicit categories of “foreseeable harms” in tort liability, as well as point to “reasonable actions” that developers can take to mitigate those harms.
Submission Number: 20