TL;DR: Extend Two-Stage Learning-to-Defer to multi-task problems, providing additional theoretical guarantees.
Abstract: The Two-Stage Learning-to-Defer (L2D) framework has been extensively studied for classification and, more recently, regression tasks. However, many real-world applications require solving both tasks jointly in a multi-task setting. We introduce a novel Two-Stage L2D framework for multi-task learning that integrates classification and regression through a unified deferral mechanism. Our method leverages a two-stage surrogate loss family, which we prove to be both Bayes-consistent and $(\mathcal{G}, \mathcal{R})$-consistent, ensuring convergence to the Bayes-optimal rejector. We derive explicit consistency bounds tied to the cross-entropy surrogate and the $L_1$-norm of agent-specific costs, and extend minimizability gap analysis to the multi-expert two-stage regime. We also make explicit how shared representation learning—commonly used in multi-task models—affects these consistency guarantees. Experiments on object detection and electronic health record analysis demonstrate the effectiveness of our approach and highlight the limitations of existing L2D methods in multi-task scenarios.
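For intuition, here is a minimal sketch of the deferral mechanism in the two-stage setting; the notation ($h$, $f$, $m_j$, $c_j$, $\beta_j$) is illustrative and not necessarily the paper's own, and the multi-task cost combining a classification and a regression loss is an assumption. Given a fixed predictor pair $(h, f)$ trained in the first stage and $J$ experts, the rejector $r$ selects which agent handles input $x$, and the deferral loss charges the cost of the selected agent:
\[
\ell_{\mathrm{def}}(r, x, y, t) \;=\; \sum_{j=0}^{J} c_j(x, y, t)\,\mathbf{1}\{r(x) = j\},
\qquad
c_0(x, y, t) \;=\; \ell_{\mathrm{cls}}(h(x), y) + \ell_{\mathrm{reg}}(f(x), t),
\]
and, for each expert $j \geq 1$, $c_j(x, y, t) = \ell_{\mathrm{cls}}(m_j^{\mathrm{cls}}(x), y) + \ell_{\mathrm{reg}}(m_j^{\mathrm{reg}}(x), t) + \beta_j$ with a consultation cost $\beta_j \geq 0$. In the second stage, a surrogate (e.g., a cost-weighted cross-entropy over the $J+1$ choices) is minimized over $r$ while $(h, f)$ is held fixed.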
Lay Summary: Many real-world applications—such as object detection or electronic health record analysis—require solving classification and regression jointly. However, existing Learning-to-Defer methods treat these tasks separately, leading to suboptimal or inconsistent decisions. We propose a unified Two-Stage Learning-to-Defer framework for multi-task learning, enabling coordinated deferral across both tasks. Our method comes with theoretical guarantees, including Bayes-consistent surrogate losses, tight consistency bounds, and a novel characterization of the minimizability gap. It also provides insight into the interplay between shared representation learning, consistency bounds, and the minimizability gap.
Primary Area: Theory->Learning Theory
Keywords: Learning to defer, learning from abstention, learning theory, multi-task learning
Submission Number: 2395