Human Expertise Really Matters! Mitigating Unfair Utility Induced by Heterogeneous Human Expertise in AI-assisted Decision-Making

25 Sept 2024 (modified: 22 Jan 2025) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: AI for good, Human-centric ML, Fairness, Calibration
Abstract: In AI-assisted decision-making, an AI model often provides a confidence estimate alongside its prediction, which human decision-makers can integrate with their own confidence to reach higher-utility final decisions. However, when human decision-makers are heterogeneous in their expertise, existing AI assistance may fail to provide fair utility across them. Such unfairness raises social-welfare concerns: inequitable access to equally effective AI assistance may reduce decision-makers' trust in, and willingness to engage with, AI systems. In this work, we investigate how to calibrate AI confidence so that it yields fair utility for human decision-makers. We first demonstrate that rational decision-makers with heterogeneous expertise are unlikely to obtain fair decision utility under existing AI confidence calibrations. We then propose a novel confidence calibration criterion, *inter-group-alignment*, which combines with human-alignment to jointly determine the upper bound of the utility disparity across groups of human decision-makers. Building on this foundation, we propose a new fairness-aware confidence calibration method, *group-level multicalibration*, which provides a sufficient condition for achieving both inter-group-alignment and human-alignment. We validate our theoretical findings through extensive experiments on four real-world multimodal tasks. The results indicate that our calibrated AI confidence yields fairer utility while also enhancing overall utility. *The implementation code is available at* [https://anonymous.4open.science/r/iclr4103](https://anonymous.4open.science/r/iclr4103).
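To make the group-level multicalibration idea concrete, here is a minimal, hedged sketch of a standard binning-based multicalibration step applied per decision-maker group: within every (group, confidence-bin) cell, the AI's confidence is replaced by the empirical outcome frequency, so confidence is simultaneously calibrated for each group. The function name, binning scheme, and one-shot update are illustrative assumptions for exposition, not the paper's exact algorithm.

```python
# Illustrative sketch only: group-wise multicalibration via histogram binning.
# This is NOT the paper's algorithm; the cell-wise empirical-frequency update
# is the textbook multicalibration building block, applied per human group.
from collections import defaultdict

def group_multicalibrate(confs, groups, outcomes, n_bins=10):
    """Recalibrate confidences so each (group, confidence-bin) cell is
    calibrated: its confidence equals the cell's empirical outcome rate."""
    cells = defaultdict(list)  # (group, bin index) -> example indices
    for i, (c, g) in enumerate(zip(confs, groups)):
        b = min(int(c * n_bins), n_bins - 1)  # clamp c == 1.0 into last bin
        cells[(g, b)].append(i)
    calibrated = list(confs)
    for idx in cells.values():
        mean_outcome = sum(outcomes[i] for i in idx) / len(idx)
        for i in idx:
            calibrated[i] = mean_outcome
    return calibrated
```

For example, if two expertise groups both receive a raw confidence of 0.9 but their empirical accuracies in that bin differ (say 0.75 vs. 0.25), the update assigns each group its own calibrated value, removing the cross-group miscalibration that drives unequal decision utility.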
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4103