Keywords: model routing, domain adaptation, theory
Abstract: The rapid proliferation of domain-specialized machine learning models
presents a challenge: while individual models excel in specific
domains, their performance varies significantly across diverse
applications. Selecting the optimal model is therefore difficult when
facing an unknown mixture of tasks, especially with limited or no data
available to estimate the mixture. We address this
challenge by formulating it as a multiple-source domain adaptation
(MSA) problem. We introduce a novel, scalable algorithm that
effectively routes each input to the best-suited model from a pool of
available models. Our approach comes with a strong performance
guarantee: remarkably, for any mixture domain, it matches the accuracy
achieved by the best source model. This guarantee is established through a
theoretical bound on the regret for new domains, expressed as a convex
combination of the best regrets in the source domains, plus a
concentration term that diminishes as the amount of source data
increases. While our primary contributions are theoretical and
algorithmic, we also present empirical results demonstrating the
effectiveness of our approach.
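For illustration only (the notation below is assumed, not taken from the submission), a bound of the stated form can be sketched as follows: for a target mixture $\mathcal{D}_\lambda = \sum_{k=1}^{K} \lambda_k \mathcal{D}_k$ over $K$ source domains with $m$ samples per source,
$$
\mathrm{Reg}_{\mathcal{D}_\lambda}(h) \;\le\; \sum_{k=1}^{K} \lambda_k \,\min_{j} \mathrm{Reg}_{\mathcal{D}_k}(h_j) \;+\; O\!\left(\sqrt{\tfrac{\log(K/\delta)}{m}}\right),
$$
where $h$ is the routed predictor, $h_j$ the model trained on source $j$, and the second term is the concentration term that diminishes as the amount of source data $m$ increases.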
Primary Area: General machine learning (supervised, unsupervised, online, active, etc.)
Submission Number: 9150