Assessing Gender and Age Influences on Moral Decision Making in Autonomous Vehicles Using Large Language Models

Published: 01 Jan 2025, Last Modified: 20 May 2025CCNC 2025EveryoneRevisionsBibTeXCC BY-SA 4.0
Abstract: Autonomous systems, especially those in safety-critical applications like Autonomous Vehicles (AVs), require human-like reasoning capabilities to make ethical decisions. Large Language Models (LLMs) have shown potential in simulating diverse human moral responses, offering insights into how different moral frameworks, such as utilitarianism and deontological ethics, could enhance decision-making algorithms in AVs. Existing research indicates that LLMs tend to align with commonsense morality in morally unambiguous cases, but face challenges in providing detailed justifications for their choices. Studies leveraging frameworks like the Moral Machine and Moral Foundations Theory have explored how LLMs simulate human preferences. Despite this progress, a significant gap remains in understanding how gender and age impact moral preferences when decisions are influenced by LLMs in autonomous systems. This paper addresses this gap by investigating how LLM-based systems can reflect and adapt to moral preferences across gender and age groups, while ensuring that these systems offer transparent explanations that align with human moral intuitions in high-stakes AV decision-making scenarios. It was found that LLM models demonstrated diverse tendencies: some leaned towards favoring younger individuals over older ones, while others displayed a subtle preference for males in decision-making situations, highlighting differences in how the models prioritized age and gender.
Loading