Fair Continuous Resource Allocation with Equality of Impact

Published: 18 Sept 2025 · Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY 4.0
Keywords: resource allocation, fairness, machine learning, fair machine learning, equality of impact, group fairness, fair resource allocation, fairness regret, constant regret, online learning, marginal returns, constraint learning, censored feedback, online convex optimization
Abstract: Recent work has studied fair resource allocation in social settings, where fairness is judged by the impact of allocation decisions rather than by more traditional minimum or maximum thresholds on the allocations themselves. Our work adds to this literature by developing continuous resource allocation strategies that adhere to *equality of impact*, a generalization of equality of opportunity. We derive methods that maximize total welfare across groups subject to minimal violation of equality of impact, in settings where the outcomes of allocations are unknown but exhibit diminishing marginal effects. Although we focus on a two-group setting, our study addresses a broader class of welfare dynamics than explored in prior work. Our contributions are threefold. First, we introduce *Equality of Impact (EoI)*, a fairness criterion defined via group-level impact functions. Second, we design an online algorithm for noise-free settings that leverages the problem's geometric structure and achieves constant cumulative fairness regret. Third, we extend this approach to noisy environments with a meta-algorithm and empirically demonstrate that our methods find fair allocations and perform competitively relative to representative baselines.
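To make the objective in the abstract concrete, the following is a minimal formalization sketch under assumed notation; the per-round budget $B$, the impact functions $f_g$, and the regret expression below are illustrative placeholders, not the paper's exact definitions.

```latex
% Illustrative sketch (assumed notation, not the paper's exact formulation).
% Two groups g \in \{1,2\} share a continuous budget B at each round t:
\[
  \max_{a_{t,1} + a_{t,2} \le B} \; f_1(a_{t,1}) + f_2(a_{t,2})
  \quad \text{subject to} \quad f_1(a_{t,1}) = f_2(a_{t,2}),
\]
% where each group-level impact function f_g is unknown to the learner,
% increasing, and concave (diminishing marginal effect). A natural
% cumulative fairness regret after T rounds is then
\[
  R_T \;=\; \sum_{t=1}^{T} \bigl| f_1(a_{t,1}) - f_2(a_{t,2}) \bigr|,
\]
% and "constant regret" means R_T = O(1), i.e., bounded independently of T.
```

Under this reading, the EoI constraint equalizes realized impacts across groups rather than the allocations themselves, which is what distinguishes it from threshold-based allocation fairness.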
Primary Area: Social and economic aspects of machine learning (e.g., fairness, interpretability, human-AI interaction, privacy, safety, strategic behavior)
Submission Number: 23659