Who Should Do What? Adaptive Delegation in Human-AI Collaboration

Published: 28 Nov 2025, Last Modified: 30 Nov 2025
Venue: NeurIPS 2025 Workshop MLxOR
License: CC BY 4.0
Keywords: Human-AI, Generalized Nash Equilibrium, Agentic AI
Abstract: As human-AI collaboration becomes increasingly common in real-world decision-making systems, it is essential to develop principled frameworks for deciding who should act and when: the AI, the human, or both. In this paper, we develop optimal delegation strategies for settings where human oversight adds value but comes at a cost. We propose an adaptive delegation framework in which a central coordinator assigns each task to the AI, the human, or a human-in-the-loop review process. Importantly, we model the human as a cost-sensitive and adaptive agent whose effort adapts to the AI's accuracy. This interaction is formalized using a generalized Nash equilibrium framework, which allows us to characterize stable collaboration strategies under broad conditions. We provide theoretical guarantees that identify when adaptive delegation enables effective cooperation and how human-AI collaboration evolves as AI capabilities improve. Numerical experiments confirm that our approach improves overall system performance under accuracy and cost constraints. These results offer practical guidance for designing agentic AI systems that balance efficiency with meaningful human involvement.
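The coordinator's delegation decision described above can be illustrated with a minimal sketch. This is a hypothetical cost-comparison rule, not the paper's actual formulation: the function name `delegate`, the parameters, and the expected-cost model (errors penalized linearly, review modeled as the human catching AI mistakes at an extra cost) are all illustrative assumptions.

```python
# Hypothetical sketch: a coordinator assigns each task to the AI, the
# human, or human-in-the-loop (HITL) review by minimizing expected cost.
# The cost model below is an illustrative assumption, not the paper's.

def delegate(ai_accuracy: float, human_accuracy: float,
             human_cost: float, review_cost: float,
             error_cost: float = 1.0) -> str:
    """Return 'ai', 'human', or 'review' for the lowest expected cost."""
    # AI acting alone: pay only for its expected errors.
    cost_ai = (1 - ai_accuracy) * error_cost
    # Human acting alone: pay for human effort plus residual errors.
    cost_human = (1 - human_accuracy) * error_cost + human_cost
    # HITL review: the human checks the AI's output, catching AI errors
    # at the human's accuracy, in exchange for an added review cost.
    cost_review = ((1 - ai_accuracy) * (1 - human_accuracy) * error_cost
                   + review_cost)
    options = {"ai": cost_ai, "human": cost_human, "review": cost_review}
    return min(options, key=options.get)
```

Under these assumptions, the rule reproduces the qualitative behavior the abstract describes: a highly accurate AI is left to act alone, a weak AI with a cheap reviewer triggers HITL review, and an expensive review process pushes tasks to the human directly.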
Submission Number: 228