Moral Mimicry: Large Language Models Produce Moral Rationalizations Tailored to Political Identity

Published: 01 Jan 2022 · Last Modified: 08 Sep 2023 · CoRR 2022
Abstract: Large Language Models (LLMs) have demonstrated impressive capabilities in generating fluent text, as well as tendencies to reproduce undesirable social biases. This study investigates whether LLMs reproduce the moral biases associated with political groups in the United States, an instance of a broader capability termed here moral mimicry. The hypothesis is explored in the GPT-3/3.5 and OPT families of Transformer-based LLMs. Using tools from Moral Foundations Theory, it is shown that these LLMs are indeed moral mimics: when prompted with a liberal or conservative political identity, they generate text reflecting the corresponding moral biases. The study also examines how moral mimicry relates to model size, and the similarity between human and LLM moral word use.