Embracing Contradiction: Theoretical Inconsistency Will Not Impede the Road to Building Responsible AI Systems

Published: 24 Dec 2025, Last Modified: 24 Dec 2025
Venue: MURE Workshop Poster
License: CC BY 4.0
Track: Published paper track
Published Or Accepted: true
Publication Venue: NeurIPS 2025 Position paper track
Keywords: Responsible AI, metric inconsistency, value pluralism, multi-objective optimization, Pareto front
TL;DR: Treat contradictions among Responsible AI metrics as features that preserve plural values, capture ethical complexity, and regularize models; define acceptable inconsistency bands and build tools to navigate the tradeoffs.
Abstract: This paper argues that inconsistency among Responsible AI metrics is a feature, not a flaw. Treating divergent metrics as distinct objectives yields three benefits: normative pluralism that reflects diverse stakeholder values; epistemological completeness that preserves richer information about complex ethical concepts; and implicit regularization that prevents overfitting to any single metric and improves robustness. Forcing consistency by pruning metrics narrows the range of values represented, reduces conceptual depth, and can harm performance. We call for defining acceptable bands of inconsistency and for mechanisms that make alignment practical and robust.
Submission Number: 3
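
To make the abstract's framing concrete, here is a minimal, hypothetical Python sketch of treating several Responsible AI metrics as separate objectives: candidates are kept on the Pareto front rather than ranked by a single collapsed score, and an "acceptable inconsistency band" is modeled as per-metric floors. The metric names, scores, floors, and function names are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (not from the paper): keep divergent metrics as
# separate objectives, retain the Pareto front of candidate models, and
# filter by per-metric floors that define an acceptable inconsistency band.

def dominates(a, b):
    """True if scores `a` are at least as good as `b` on every metric
    and strictly better on at least one (higher is better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the candidates not dominated by any other candidate."""
    return [
        (name, scores)
        for name, scores in candidates
        if not any(dominates(other, scores) for _, other in candidates)
    ]

def within_band(scores, floors):
    """Every metric must clear its floor; beyond that, the metrics are
    allowed to disagree about which model is best."""
    return all(s >= f for s, f in zip(scores, floors))

if __name__ == "__main__":
    # Hypothetical (accuracy, fairness, privacy) scores for four models.
    candidates = [
        ("model_a", (0.92, 0.70, 0.60)),
        ("model_b", (0.88, 0.85, 0.65)),
        ("model_c", (0.85, 0.80, 0.90)),
        ("model_d", (0.80, 0.60, 0.55)),  # dominated by model_c
    ]
    floors = (0.82, 0.65, 0.55)  # per-metric minimums defining the band

    viable = [
        (name, scores)
        for name, scores in pareto_front(candidates)
        if within_band(scores, floors)
    ]
    for name, scores in viable:
        print(name, scores)  # model_a, model_b, model_c all survive
```

Note the design choice this sketch encodes: instead of scalarizing the metrics into one weighted score (which would force consistency and pick a single winner), the front preserves the disagreement among metrics, leaving the tradeoff for stakeholders and tooling to navigate.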