Did the Robot Really Intend to Harm Me? The Effect of Perceived Agency and Intention on Fairness Judgments

Published: 01 Jan 2025 · Last Modified: 16 May 2025 · HRI 2025 · CC BY-SA 4.0
Abstract: Determining whether a robot's actions will be perceived as fair or unfair is complicated in Human-Robot Interaction (HRI), where factors like the robot's perceived agency and intent may influence these judgments. We report findings from two experiments that examine how people evaluate fairness after reading a scenario in which a robot harms a human. In these experiments, we manipulate different aspects of the context: the fairness of the situation (Fair vs. Unfair); the perceived agency of the robot that commits the harm (High Agency vs. Low Agency); and the perceived intention behind the harmful action (Intentional vs. Unintentional). We examine fairness as a multifaceted construct, using Fairness Theory to capture three key components: reduced welfare, conduct, and moral transgression. We find that this multifaceted perspective captures nuances in fairness judgments. When robots are perceived to have greater decision-making autonomy, people tend to assign them greater moral responsibility, especially when the harmful action appears intentional. Conversely, when robots are seen as merely following predetermined programming, people focus more on the possibility that the programming could have been designed differently. These findings highlight that agency and intention must be considered when investigating fairness in HRI.