Agents Aren't Agents: The Agency, Loyalty and Accountability Problems of AI Agents

ICLR 2026 Conference Submission15008 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: AI agents, agency, alignment, fiduciary duties, large language models, loyalty, accountability
TL;DR: AI agents resemble human Agents but lack personhood and undivided loyalty, making agency law an unreliable governance tool.
Abstract: The rapid adoption of AI agents marks a shift from predictable digital services to systems entrusted with autonomous, judgment-like tasks. As people delegate more responsibility to these agents, questions of control, loyalty, and accountability become urgent. Yet today’s agents are operated through fragmented layers of control by developers, hosts, and providers, which blur lines of responsibility and divide loyalties before users ever interact with them. Without reconsideration, we risk misallocating responsibility, overstating loyalty, and obscuring who ultimately benefits from these systems. In this paper, we systematically discuss the key issues that prevent AI agents from attaining true legal agency. We identify three unresolved problems: Agency—in the polyadic governance of AI development and deployment, who is the principal and who is the agent; Loyalty—whether AI agents can serve the principal’s best interests; and Accountability—when AI agents make mistakes, who bears responsibility. We examine the technological foundations that give rise to these problems and highlight key limitations of the current agency law framework in addressing emerging issues related to AI agents. As a position paper, our study offers fresh perspectives on AI agents from a legal standpoint and could inspire new research directions in this domain.
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 15008