When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment

Published: 31 Oct 2022, 18:00 · Last Modified: 14 Dec 2022, 04:31 · NeurIPS 2022 Accept · Readers: Everyone
Keywords: AI safety, Social Aspects of Machine Learning, ethics, cognitive science, moral decision-making
TL;DR: We present a novel challenge set that highlights the flexibility of the human moral mind, analyze large language models' performance on it, and propose a Moral Chain-of-Thought prompting strategy.
Abstract: AI systems are becoming increasingly intertwined with human life. In order to effectively collaborate with humans and ensure safety, AI systems need to be able to understand, interpret, and predict human moral judgments and decisions. Human moral judgments are often guided by rules, but not always. A central challenge for AI safety is capturing the flexibility of the human moral mind: the ability to determine when a rule should be broken, especially in novel or unusual situations. In this paper, we present a novel challenge set consisting of moral exception question answering (MoralExceptQA) cases that involve potentially permissible moral exceptions, inspired by recent moral psychology studies. Using a state-of-the-art large language model (LLM) as a basis, we propose a novel moral chain-of-thought (MoralCoT) prompting strategy that combines the strengths of LLMs with theories of moral reasoning developed in cognitive science to predict human moral judgments. MoralCoT outperforms seven existing LLMs by 6.2% F1, suggesting that modeling human reasoning might be necessary to capture the flexibility of the human moral mind. We also conduct a detailed error analysis to suggest directions for future work to improve AI safety using MoralExceptQA. Our data is open-sourced at https://huggingface.co/datasets/feradauto/MoralExceptQA and code at https://github.com/feradauto/MoralCoT.
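As a rough illustration of the idea behind a chain-of-thought prompting strategy like MoralCoT, the judgment can be decomposed into intermediate subquestions posed before the final permissibility question. The subquestions and scenario text below are hypothetical placeholders, not the paper's exact prompts; see the linked repository for the actual implementation.

```python
# Hypothetical sketch of a MoralCoT-style prompt builder.
# The subquestions paraphrase the general idea of stepwise moral
# reasoning; they are NOT the exact prompts used in the paper.

SUBQUESTIONS = [
    "What is the purpose of the rule in this situation?",
    "Who would be harmed or helped if the rule were broken here?",
    "Does breaking the rule in this case serve or defeat its purpose?",
]

def build_moralcot_prompt(scenario: str) -> str:
    """Compose a prompt: scenario, then stepwise subquestions,
    then the final permissibility question for the LLM to answer."""
    steps = "\n".join(f"Step {i + 1}: {q}" for i, q in enumerate(SUBQUESTIONS))
    return (
        f"Scenario: {scenario}\n"
        f"Answer each step before giving a final judgment.\n"
        f"{steps}\n"
        f"Final question: All things considered, is it permissible "
        f"to break the rule in this scenario? Answer Yes or No."
    )

# Example (hypothetical scenario):
print(build_moralcot_prompt(
    "A rule says no cutting in line, but someone asks to cut "
    "in order to catch a departing flight."
))
```

The resulting string would then be sent to an LLM; eliciting the intermediate answers before the final Yes/No is what distinguishes this style of prompting from asking for the judgment directly.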
Supplementary Material: pdf