Uncertainty-Aware Systems for Human-AI Collaboration

TMLR Paper6671 Authors

26 Nov 2025 (modified: 07 Dec 2025) · Under review for TMLR · CC BY 4.0
Abstract: \textit{Learning to defer} (\textbf{L2D}) algorithms improve human-AI collaboration (\textbf{HAIC}) by deferring decisions to human experts when they are more likely to be correct than the AI model. This framework hinges on machine learning (\textbf{ML}) models' ability to assess their own certainty and that of human experts. L2D struggles in dynamic environments, where distribution shifts impair deferral decisions. We present two uncertainty-aware approaches to HAIC. First, we enhance L2D by combining ML model outputs with density functions, improving uncertainty estimation and robustness. Second, we use density-based conformal prediction to assess epistemic uncertainty, dynamically choosing the assignment strategy: standard L2D for in-distribution instances, and direct deferral to human experts for high-uncertainty instances. Both methods are, to our knowledge, the first uncertainty-aware approaches for HAIC, and they also address known limitations of L2D systems, including cost-sensitive scenarios, limited availability of human predictions, and capacity constraints. Empirical evaluation on fraud detection shows that both approaches outperform state-of-the-art baselines while improving calibration and supporting real-world adoption.
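The second approach described in the abstract — using density-based conformal prediction to flag high epistemic uncertainty and route those instances directly to humans — can be illustrated with a minimal sketch. All function and variable names below are illustrative assumptions, not the paper's actual method or API; the deferral rule shown (defer when a test point's density score falls below the alpha-quantile of calibration densities) is one simple instantiation of the general idea.

```python
import numpy as np

def conformal_deferral(model_probs, density_scores, cal_densities, alpha=0.1):
    """Toy sketch of density-based conformal deferral (names are
    illustrative, not from the paper). Instances whose density score
    is atypically low at level alpha are treated as high epistemic
    uncertainty and deferred to a human; the rest get the model's
    prediction (in a full system, they would enter the L2D pipeline).
    """
    # Conformal-style threshold: the alpha-quantile of calibration
    # densities. Test points below it are atypical at level alpha.
    threshold = np.quantile(cal_densities, alpha)
    defer_to_human = density_scores < threshold
    model_decision = model_probs.argmax(axis=1)
    # Use -1 as a sentinel meaning "assigned to a human expert".
    return np.where(defer_to_human, -1, model_decision)

# Toy usage: calibration densities from in-distribution data.
rng = np.random.default_rng(0)
cal = rng.normal(1.0, 0.2, size=1000)        # calibration density scores
probs = np.array([[0.9, 0.1], [0.4, 0.6]])   # model class probabilities
dens = np.array([1.0, 0.1])                  # second instance is atypical
print(conformal_deferral(probs, dens, cal))  # -> [ 0 -1]
```

In this sketch the first instance is in-distribution and receives the model's prediction, while the second has a density far below the calibration threshold and is deferred, mirroring the dynamic assignment strategy the abstract describes.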
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Ilia_Sucholutsky1
Submission Number: 6671