Commute Your Domains: Trajectory Optimality Criterion for Multi-Domain Learning

Published: 11 Oct 2024, Last Modified: 04 Dec 2024 · M3L Poster · CC BY 4.0
Keywords: Multi-domain learning, Lie bracket, Gradient dynamics, Domain interaction
Abstract: In multi-domain learning, a single model is trained on diverse data domains to leverage shared knowledge and improve generalization. The order in which the data from these domains is used for training can significantly affect the model's performance on each domain, yet this dependence remains under-studied. In this paper, we investigate the influence of training order (or data mixing) in multi-domain learning using the Lie bracket of gradient vector fields. By analyzing the infinitesimal effects of changing the training order, we identify regions in parameter space where swapping the order of two training domains can benefit the target loss. We validate the predictions of our theoretical framework both on a toy example and on bilingual LLM pre-training.
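The order dependence the abstract describes can be seen in a few lines of numerical linear algebra. The sketch below is an illustrative toy, not the paper's method: it uses two hypothetical quadratic "domain" losses with non-commuting Hessians and shows that taking a gradient step on domain A then domain B lands at a different point than B then A, with the gap given exactly (for quadratics) by the Lie bracket of the two negative-gradient fields.

```python
import numpy as np

# Two toy quadratic domain losses L_A(θ) = ½θᵀAθ and L_B(θ) = ½θᵀBθ.
# A and B are arbitrary non-commuting symmetric matrices (assumptions
# for illustration only, not taken from the paper).
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
B = np.array([[1.0, 0.5],
              [0.5, 1.0]])

grad_A = lambda th: A @ th   # ∇L_A
grad_B = lambda th: B @ th   # ∇L_B

theta = np.array([1.0, -1.0])
eta = 0.1

# Train on domain A first, then domain B.
th_ab = theta - eta * grad_A(theta)
th_ab = th_ab - eta * grad_B(th_ab)

# Train on domain B first, then domain A.
th_ba = theta - eta * grad_B(theta)
th_ba = th_ba - eta * grad_A(th_ba)

# For the vector fields f = -∇L_A and g = -∇L_B, the Lie bracket is
# [f, g](θ) = (Jg)f - (Jf)g = (BA - AB)θ, and the order discrepancy is
# θ_AB - θ_BA = η² [f, g](θ) (exact here because the gradients are linear).
bracket = (B @ A - A @ B) @ theta
print(np.allclose(th_ab - th_ba, eta**2 * bracket))
```

When the Hessians commute (e.g. both diagonal), the bracket vanishes and the training order has no second-order effect; the paper's criterion looks for regions of parameter space where this bracket term favors one ordering over the other.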
Is NeurIPS Submission: No
Submission Number: 90
