Advancing Moral Decision-Making for Autonomous Vehicles

Published: 01 Jan 2025, Last Modified: 20 May 2025, CCNC 2025, CC BY-SA 4.0
Abstract: Autonomous vehicles (AVs), critical to future intelligent transportation, owe much of their advanced capability to reinforcement learning, which enables their intelligent decision-making. As AV adoption increases, concerns remain about how AVs behave in situations involving moral uncertainty. Current approaches are constrained by operating in restricted environments and by the difficulty of assigning credence values to ethical theories, which limits AVs' ability to make moral decisions under uncertainty. This paper incorporates a comprehensive exploration of a new moral theory and new scenarios into simulation frameworks to help overcome these state-of-the-art limitations. We introduce a justice theory, inspired by the Moral Machine framework, into the Uber research platform to study the role of fairness among individuals in morally uncertain situations. Furthermore, we introduce novel reward structures into the framework, analogous to deontological and utilitarian theories, to comprehensively evaluate the state-of-the-art voting methodologies, namely Nash voting and variance voting. We find that variance voting is effective across both sequential and non-sequential environments, whereas Nash voting is suitable primarily for sequential settings.
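To illustrate the kind of credence-weighted aggregation such voting schemes rely on, the sketch below shows a variance-voting-style action selection over per-theory preferences. It is a minimal, hypothetical example under stated assumptions, not the paper's implementation: the theory names, credence values, and the `variance_vote` helper are illustrative only.

```python
import numpy as np

def variance_vote(q_values: dict[str, np.ndarray], credences: dict[str, float]) -> int:
    """Variance-voting-style aggregation (illustrative sketch, not the paper's code).

    Each ethical theory contributes preference values (e.g. Q-values) over the
    same action set. Preferences are standardized per theory so that no single
    theory dominates by scale, then combined in a credence-weighted sum; the
    action with the highest aggregate score is selected.
    """
    n_actions = len(next(iter(q_values.values())))
    aggregate = np.zeros(n_actions)
    for theory, q in q_values.items():
        q = np.asarray(q, dtype=float)
        std = q.std()
        # Standardize this theory's preferences (zero mean, unit variance).
        normalized = (q - q.mean()) / std if std > 0 else np.zeros_like(q)
        aggregate += credences[theory] * normalized
    return int(np.argmax(aggregate))

# Hypothetical example: three candidate actions scored by two ethical theories.
q_values = {
    "utilitarian":   np.array([2.0, 5.0, 1.0]),    # e.g. expected aggregate welfare
    "deontological": np.array([0.0, -3.0, 0.0]),   # e.g. penalty for rule violations
}
credences = {"utilitarian": 0.6, "deontological": 0.4}  # assumed credence values

print(variance_vote(q_values, credences))  # index of the selected action
```

Per-theory normalization keeps a theory with a large reward scale from swamping the vote, which is the intuition behind variance voting; Nash voting distributes each theory's influence differently and, per the abstract's findings, is suited primarily to sequential settings.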