When engineering meets economics: AI-powered safe and accountable autonomous driving

Xuan Di, Herbert Dawid, Gerd Muehlheusser

Published: 01 Sept 2025 · Last Modified: 07 Nov 2025 · Artificial Intelligence for Transportation · CC BY-SA 4.0
Abstract: AI has already affected many aspects of daily life, and its importance can be expected to grow even further. Relying heavily on AI, autonomous vehicles (AVs) are complex engineered systems that can make life-and-death decisions. Because of their profound impact on society, AVs must be designed and developed to be safe, accountable, and ultimately trustworthy to stakeholders and responsible to society. This paper discusses a pathway to designing and developing safe and accountable AVs from the interdisciplinary perspective of engineering, economics, and the economic analysis of law. We primarily discuss different approaches to programming safety rules into AV decision making, and how these approaches fall within a behavior-oriented or an outcome-oriented paradigm. Building on these paradigms, the widely used reinforcement learning approach to training AVs is outcome-based, while the imitation learning approach is behavior-based. Understanding which driving tasks belong to which paradigm facilitates the design of safety principles that build trust. The distinction between outcome-based and behavior-based approaches also connects the task of training AVs to well-established results from agency theory on how to optimally induce desired actions from human agents. Agency theory likewise provides guidance for aligning the interests of the different parties in the AV value chain. Finally, we investigate how tort liability can foster the accountability of AVs.
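The outcome-based vs. behavior-based distinction drawn in the abstract can be illustrated with a minimal sketch of the two kinds of training signal. All names here (`outcome_based_signal`, `behavior_based_signal`, the reward magnitudes, and the trajectory fields) are illustrative assumptions, not taken from the paper: an outcome-oriented signal, as in reinforcement learning, scores only the result of driving, while a behavior-oriented signal, as in imitation learning, scores only the deviation from an expert's action.

```python
# Hypothetical contrast between the two paradigms discussed in the abstract.
# Field names and reward magnitudes are illustrative assumptions.

def outcome_based_signal(trajectory_outcome: dict) -> float:
    """Outcome-oriented (RL-style): reward depends only on the result of
    the drive (e.g. goal reached, collision), not on how it was driven."""
    reward = 1.0 if trajectory_outcome["reached_goal"] else 0.0
    if trajectory_outcome["collision"]:
        reward -= 10.0
    return reward

def behavior_based_signal(agent_action: float, expert_action: float) -> float:
    """Behavior-oriented (imitation-style): penalize per-step deviation from
    the expert's action, regardless of the eventual outcome."""
    return -(agent_action - expert_action) ** 2

# A drive that mimicked the expert closely but still crashed scores well
# under the behavior signal and poorly under the outcome signal.
print(outcome_based_signal({"reached_goal": False, "collision": True}))  # -10.0
print(behavior_based_signal(0.10, 0.12))
```

The design choice this sketch highlights is the one the paper maps onto agency theory: whether the principal evaluates the agent's observable behavior or only the realized outcome.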