The Price of Justice in Machine Learning: Fair Division with Subjective Value under Bounded Rationality

09 Mar 2026 (modified: 01 May 2026) · Under review for TMLR · CC BY 4.0
Abstract: Statistical fairness criteria such as Demographic Parity, Equalized Odds, and Calibration are widely used in machine learning, but rate constraints do not by themselves specify how decision burdens are distributed across individuals. We develop a harm-allocation framework that models heterogeneous subjective error costs and evaluates decisions through fair-division axioms. Within this framework, we identify conditions under which common statistical criteria serve as valid surrogates for harm-based fairness, construct counterexamples where cost heterogeneity breaks this surrogate relationship, and reinterpret classic incompatibilities as conflicts between allocation principles. We further establish approximation lower bounds under bounded-information settings, including noisy or coarse cost information and restricted policy classes. Experiments connect these theoretical results to empirical behavior, showing when surrogate metrics align with or diverge from harm-based fairness, revealing approximation floors, and tracing these patterns across tasks, models, and fairness interventions under finer-grained harm diagnostics.
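The abstract's central observation — that rate constraints can be satisfied while decision burdens differ across individuals — can be made concrete with a small illustrative sketch. This is not code from the paper: the groups, decisions, and cost values below are hypothetical, chosen so that Demographic Parity holds exactly while heterogeneous subjective error costs produce unequal expected harm.

```python
# Illustrative sketch (hypothetical data, not from the paper): two groups
# with identical positive rates, so the Demographic Parity gap is zero,
# but different subjective costs per error, so expected harm diverges.

def positive_rate(decisions):
    """Fraction of individuals receiving the positive decision."""
    return sum(decisions) / len(decisions)

def expected_harm(decisions, labels, fp_cost, fn_cost):
    """Mean subjective cost: fp_cost per false positive, fn_cost per false negative."""
    total = 0.0
    for d, y in zip(decisions, labels):
        if d == 1 and y == 0:
            total += fp_cost   # false positive
        elif d == 0 and y == 1:
            total += fn_cost   # false negative
    return total / len(decisions)

# Both groups receive positives at the same rate (DP gap = 0) ...
dec_a, lab_a = [1, 1, 0, 0], [1, 0, 1, 0]
dec_b, lab_b = [1, 1, 0, 0], [1, 0, 1, 0]
dp_gap = abs(positive_rate(dec_a) - positive_rate(dec_b))

# ... but group B's subjective cost per false negative is five times higher,
# so the same error pattern inflicts more harm on group B.
harm_a = expected_harm(dec_a, lab_a, fp_cost=1.0, fn_cost=1.0)
harm_b = expected_harm(dec_b, lab_b, fp_cost=1.0, fn_cost=5.0)

print(dp_gap)   # → 0.0
print(harm_a)   # → 0.5
print(harm_b)   # → 1.5
```

Here the rate-based criterion is blind to the cost asymmetry: it certifies the two groups as equally treated even though, under the stated subjective costs, group B bears three times the expected harm. This is the sense in which cost heterogeneity can break the surrogate relationship between statistical criteria and harm-based fairness.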
Submission Type: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Olawale_Elijah_Salaudeen1
Submission Number: 7849