AI, Pluralism, and (Social) Compensation

Published: 10 Oct 2024, Last Modified: 15 Nov 2024, Pluralistic-Alignment 2024, CC BY 4.0
Keywords: personalization, human-AI team, compensatory algorithms
TL;DR: This paper examines the ethical implications of compensatory behaviors in personalized AI systems and proposes a framework for evaluating their permissibility, contributing to ongoing debates on AI ethics, autonomy, and value alignment.
Abstract: One strategy in response to pluralistic values in a user population is to personalize an AI system: if the AI can adapt to the specific values of each individual, then we can potentially avoid many of the challenges of pluralism. Unfortunately, this approach creates a significant ethical issue: if there is an external measure of success for the human-AI team, then the adaptive AI system may develop strategies (sometimes deceptive) to compensate for its human teammate. This phenomenon can be viewed as a form of "social compensation," where the AI makes decisions based not on predefined goals but on its human partner's deficiencies in relation to the team's performance objectives. We provide a practical ethical analysis of the conditions in which such compensation may nonetheless be justifiable.
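
To make the compensation dynamic concrete, here is a minimal toy sketch (our illustration, not the paper's model). It assumes a scalar team task in which a human contributor systematically undershoots a target and an adaptive AI hill-climbs against the external team reward; all names (TARGET, HUMAN_BIAS, PREFERRED_AI_ACTION, adapt) are hypothetical. The point is that optimizing the team's external success measure pulls the AI's action away from the contribution the human asked for.

```python
import random

# Hypothetical toy model: a human-AI team earns an external reward based on
# how close their combined output is to a target. The human systematically
# undershoots; an AI that adapts to the team reward learns to overshoot in
# compensation, drifting away from the human's stated preference.

TARGET = 10.0
HUMAN_BIAS = -3.0            # the human reliably contributes 3 units too little
PREFERRED_AI_ACTION = 5.0    # the AI contribution the human says they want

def human_action():
    # Noisy human contribution with a systematic shortfall.
    return TARGET / 2 + HUMAN_BIAS + random.gauss(0, 0.1)

def team_reward(h, a):
    # External measure of team success: closeness of the sum to the target.
    return -abs((h + a) - TARGET)

def adapt(ai_action, episodes=500, lr=0.05):
    """Hill-climb the AI's action against the observed team reward."""
    for _ in range(episodes):
        h = human_action()
        up = team_reward(h, ai_action + 0.1)
        down = team_reward(h, ai_action - 0.1)
        ai_action += lr * (1 if up > down else -1)
    return ai_action

learned = adapt(PREFERRED_AI_ACTION)
print(f"human-preferred AI action: {PREFERRED_AI_ACTION:.2f}")
print(f"reward-adapted AI action:  {learned:.2f}")  # drifts toward ~8.0
```

In this sketch the AI converges on roughly TARGET minus the human's average contribution (about 8.0 here), silently offsetting the human's deficit rather than following the agreed contribution of 5.0; this is the kind of compensatory, potentially deceptive behavior whose permissibility the abstract's framework is meant to evaluate.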
Submission Number: 30