Intuitions of Compromise: Utilitarianism vs. Contractualism

Published: 10 Oct 2024 · Last Modified: 15 Nov 2024 · Pluralistic-Alignment 2024 · CC BY 4.0
Keywords: value aggregation, nash product, expected utility, compromise, moral decision making, social welfare
TL;DR: Both performant LLMs and humans prefer the Nash Product, a contractualist approach, over a utilitarian approach in compromise cases.
Abstract: We are constantly faced with the question of how to aggregate preferences, views, perspectives, and values. This is a problem for groups attempting to accommodate individuals with differing needs and interests, as is our focus here; it also applies to individual rational decision makers attempting to trade off conflicting interests. The problem of "value aggregation" therefore crops up in myriad places across the social sciences---in rational decision theory, social choice models, and proposals for systems of democratic voting, for instance. These sub-disciplines have formalized proposals for how to handle value aggregation, though, remarkably, no research has yet directly compared people's intuitions about two of the most obvious candidates for aggregation: taking the \textit{sum} of all the values (the classic "Utilitarian" approach) and the \textit{product} (a less well-known "contractualist" approach). In this paper, we systematically explore the proposals suggested by each algorithm, focusing on aggregating preferences across groups. We then compare the judgments of large language models (LLMs) to those of our (human) participants, finding marked differences across model sizes. While the dominant assumptions in fields from decision theory to AI to philosophy have favored a utilitarian approach to value aggregation, we find that both humans and performant LLMs prefer a contractualist approach.
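The contrast between the two aggregation rules can be sketched in a few lines. The snippet below is a hypothetical illustration (the utility numbers and option names are made up, not taken from the paper): the utilitarian rule sums each option's utilities across group members, while the Nash product multiplies them, which penalizes options that leave any one member with very little.

```python
from math import prod

# Hypothetical utilities of two options for three group members.
options = {
    "A": [9, 9, 1],   # high total, but one member is nearly left out
    "B": [6, 6, 6],   # an even compromise
}

# Utilitarian rule: pick the option with the highest sum of utilities.
utilitarian_choice = max(options, key=lambda o: sum(options[o]))

# Contractualist (Nash product) rule: pick the highest product of utilities.
nash_choice = max(options, key=lambda o: prod(options[o]))

print(utilitarian_choice)  # "A" (sum 19 vs 18)
print(nash_choice)         # "B" (product 216 vs 81)
```

Because the product is dragged down by any near-zero utility, the Nash rule favors the even compromise "B" even though "A" has the larger total, which is the kind of compromise case the abstract refers to.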
Submission Number: 39
