Keywords: decision making under uncertainty, fairness-aware optimization, priority-based admissions, uncertainty quantification
TL;DR: We provide a framework to prevent fairness interventions in priority-based admissions from backfiring due to ML estimation error, by statistically quantifying the uncertainty of both the systemic fairness adjustment direction and individual decisions.
Abstract: Priority-based admission policies are widely used to determine who can access scarce resources. Such policies assign scores to arriving individuals and prioritize those with higher scores for admission. Ideally, the resources yield greater benefits for individuals with higher scores, while the scores also provide a mechanism for ensuring fairness in resource access, according to some agreed-upon metric. The core problem is that scores must be estimated from historical data, and so are prone to estimation error. As a result, well-intentioned interventions to promote fairness can backfire. Our contribution is to provide a framework for analytically adjusting these estimated scores, ensuring that fairness interventions are implemented with a high degree of confidence.
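The abstract's core idea, intervening on estimated scores only when the direction of the fairness adjustment is statistically certain, can be illustrated with a minimal sketch. This is a hypothetical example, not the paper's actual method: it decides whether one group's mean estimated score exceeds another's by computing a normal-approximation confidence interval for the gap, and recommends no adjustment when the interval contains zero.

```python
import math

def adjustment_direction(scores_a, scores_b, z=1.96):
    """Illustrative sketch (not the paper's method): decide whether a
    fairness adjustment between two groups is statistically warranted.

    Returns +1 if group A's mean estimated score is significantly
    higher than group B's, -1 if significantly lower, and 0 if the
    confidence interval for the gap contains zero, i.e. intervene
    only when the adjustment direction is known with high confidence.
    """
    na, nb = len(scores_a), len(scores_b)
    mean_a = sum(scores_a) / na
    mean_b = sum(scores_b) / nb
    # Sample variances (Bessel-corrected) of the estimated scores.
    var_a = sum((s - mean_a) ** 2 for s in scores_a) / (na - 1)
    var_b = sum((s - mean_b) ** 2 for s in scores_b) / (nb - 1)
    gap = mean_a - mean_b
    # Standard error of the difference in group means (Welch-style).
    se = math.sqrt(var_a / na + var_b / nb)
    lo, hi = gap - z * se, gap + z * se
    if lo > 0:
        return 1   # confident A scores higher: adjust in A's favor? No: adjust toward B
    if hi < 0:
        return -1  # confident B scores higher
    return 0       # direction uncertain: do not intervene

# Usage: clearly separated groups yield a confident direction,
# while heavily overlapping groups yield "no intervention".
print(adjustment_direction([0.90, 0.80, 0.85, 0.95, 0.88],
                           [0.10, 0.20, 0.15, 0.12, 0.18]))
print(adjustment_direction([0.50, 0.60, 0.40],
                           [0.55, 0.45, 0.50]))
```

The design choice mirrors the abstract's point: a well-intentioned adjustment applied when the estimated gap's sign is uncertain can backfire, so the sketch abstains (returns 0) unless the interval excludes zero.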
Submission Number: 110