Keywords: reinforcement learning, reason theory, machine ethics, moral justifiability
TL;DR: We add a reason-based shield generator, which yields a moral shield, along with a mechanism for its iterative improvement, to the reinforcement learning architecture, obtaining an artificial moral agent whose actions are morally justified.
Abstract: We propose an extension of the reinforcement learning architecture that enables reinforcement learning agents to make moral decisions based on normative reasons. Central to this approach is a reason-based shield generator yielding a moral shield that binds the agent to actions conforming with recognized normative reasons, so that the overall architecture restricts the agent to actions that are (internally) morally justified. In addition, we describe an algorithm that iteratively improves the reason-based shield generator through case-based feedback from a moral judge.
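To make the architecture concrete, here is a minimal sketch of reason-based shielding wrapped around a standard RL action-selection loop, together with the case-based feedback loop the abstract describes. All names (`ReasonBasedShield`, `shielded_step`, the scripted `moral_judge` dictionary) are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Hypothetical sketch: a shield restricts the agent's action set to morally
# permitted options, and a moral judge's case-based verdicts refine the shield.
import random
from typing import Dict, List, Set

State = str
Action = str

class ReasonBasedShield:
    """Maps a state to the set of actions permitted by recognized normative
    reasons; the agent may only choose among these."""

    def __init__(self, permitted: Dict[State, Set[Action]]):
        self.permitted = permitted

    def allowed(self, state: State, actions: List[Action]) -> List[Action]:
        # Restrict the agent's action set to (internally) justified options.
        allowed = [a for a in actions if a in self.permitted.get(state, set())]
        # Assumption: fall back to the full set if the shield is silent here.
        return allowed or actions

    def update(self, state: State, action: Action, judged_permissible: bool):
        # Case-based feedback from a moral judge iteratively improves the
        # shield: permissible cases widen it, impermissible cases narrow it.
        cases = self.permitted.setdefault(state, set())
        if judged_permissible:
            cases.add(action)
        else:
            cases.discard(action)

def shielded_step(state, actions, q_values, shield, epsilon=0.1):
    """Epsilon-greedy action selection restricted to shield-approved actions."""
    candidates = shield.allowed(state, actions)
    if random.random() < epsilon:
        return random.choice(candidates)
    return max(candidates, key=lambda a: q_values.get((state, a), 0.0))

# Iterative improvement loop with a scripted stand-in for the moral judge.
shield = ReasonBasedShield({"crosswalk": {"stop"}})
moral_judge = {("crosswalk", "accelerate"): False, ("road", "accelerate"): True}
for (state, action), verdict in moral_judge.items():
    shield.update(state, action, verdict)
print(shield.allowed("crosswalk", ["stop", "accelerate"]))  # ['stop']
```

The design point this sketch illustrates is that shielding sits between the learned policy and the environment: the Q-values can favor any action, but only shield-approved actions are ever executed, and the judge's verdicts change the shield rather than the reward signal.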
Submission Number: 4