Keywords: Fair Regression, Selection Bias
Abstract: Selection bias is a prevalent challenge in real-world data analysis, often stemming from biased historical censoring policies. While a growing body of fairness literature aims to mitigate accuracy disparities, few studies have considered the potential impact of selection bias in the training data. Depending on the selection mechanism, significant differences can arise between the population distribution and the training data distribution. As a result, the fairness metric computed on the training data can be heavily biased, leading to unfair learning. To address this issue in the fair regression setting, we propose weighting adjustments in the fairness constraint, which yield a novel fair regression estimator. Despite the non-convexity of the resulting problem, we derive an efficient algorithm that obtains a globally optimal solution. This work pioneers the integration of weighting adjustments into fair regression, introducing a methodology to constrain accuracy disparities under arbitrary thresholds.
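The weighting adjustment described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's estimator: it assumes known (or separately estimated) selection probabilities `pi`, uses inverse-probability weights, and measures disparity as the gap in group-wise mean predictions, all of which are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
a = rng.integers(0, 2, size=n)                 # binary sensitive attribute
y_hat = rng.normal(loc=0.2 * a, size=n)        # predictions from some fitted regressor
pi = rng.uniform(0.2, 0.9, size=n)             # selection probabilities (assumed known/estimated)
w = 1.0 / pi                                   # inverse-probability weights

def weighted_mean(v, weights):
    """Weighted mean; with w = 1/pi this approximates the population mean
    under the assumed selection mechanism."""
    return np.sum(weights * v) / np.sum(weights)

# Naive disparity: computed directly on the selection-biased sample.
naive_gap = y_hat[a == 1].mean() - y_hat[a == 0].mean()

# Weighted disparity: reweighting by 1/pi corrects the group means
# toward their population counterparts, so a fairness constraint
# built on this quantity targets population-level disparity.
adj_gap = (weighted_mean(y_hat[a == 1], w[a == 1])
           - weighted_mean(y_hat[a == 0], w[a == 0]))
```

In a constrained-estimation formulation, `|adj_gap|` (or an analogous weighted disparity measure) would be bounded by a user-chosen threshold while minimizing weighted prediction loss.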
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 12822