Acting for the Right Reasons: Creating Reason-Sensitive Artificial Moral Agents

Published: 25 Sept 2024, Last Modified: 06 May 2025 · FEAR · CC BY 4.0
Keywords: reinforcement learning, reason theory, machine ethics, moral justifiability
TL;DR: We extend the reinforcement learning architecture with a reason-based shield generator, which yields a moral shield, together with a mechanism for its iterative improvement, to obtain an artificial moral agent that acts in a morally justified way.
Abstract: We propose an extension of the reinforcement learning architecture that enables moral decision-making by reinforcement learning agents based on normative reasons. Central to this approach is a reason-based shield generator yielding a moral shield that binds the agent to actions conforming with recognized normative reasons, so that the overall architecture restricts the agent to actions that are (internally) morally justified. In addition, we describe an algorithm that iteratively improves the reason-based shield generator through case-based feedback from a moral judge.
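The abstract's core idea can be sketched in code. The following is a minimal illustrative sketch, not the paper's actual implementation: all names (`Reason`, `ShieldGenerator`, the weight-based balancing, and the judge-feedback `update` method) are assumptions introduced here. It shows a shield generator that, given a set of recognized normative reasons, permits only those actions with a maximal balance of reasons in their favor, and that can be iteratively improved when a moral judge supplies a previously unrecognized reason for a concrete case.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Reason:
    """A normative reason speaking for and/or against actions (hypothetical)."""
    description: str
    favors: str    # action the reason speaks for ("" if none)
    opposes: str   # action the reason speaks against ("" if none)
    weight: float  # relative strength of the reason

class ShieldGenerator:
    """Generates a moral shield: the subset of available actions that is
    (internally) morally justified given the recognized reasons."""

    def __init__(self, reasons):
        self.reasons = list(reasons)

    def shield(self, actions):
        # Score each action by the weighted balance of reasons for vs. against.
        def balance(a):
            return sum(r.weight * ((r.favors == a) - (r.opposes == a))
                       for r in self.reasons)
        scores = {a: balance(a) for a in actions}
        best = max(scores.values())
        # The shield permits only actions whose reason balance is maximal.
        return {a for a, s in scores.items() if s == best}

    def update(self, new_reason):
        """Case-based feedback: a moral judge supplies a reason the
        generator had not yet recognized; incorporate it."""
        self.reasons.append(new_reason)

# The RL agent would then be restricted to choosing from shield(actions).
gen = ShieldGenerator([
    Reason("promise to help", favors="help", opposes="", weight=1.0),
    Reason("risk of harm", favors="", opposes="push", weight=2.0),
])
allowed = gen.shield(["help", "push", "wait"])  # -> {"help"}
```

Under this sketch, judge feedback can reverse the shield's verdict: adding a strong reason against "help" shifts the maximal balance to "wait", illustrating the iterative improvement loop described in the abstract.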
Submission Number: 4