A High-Dimensional Statistical Method for Optimizing Transfer Quantities in Multi-Source Transfer Learning
Keywords: multi-source transfer learning, K-L divergence, high-dimensional statistics
TL;DR: We propose a theoretical framework based on asymptotic analysis to determine optimal sample transfer quantities in multi-source transfer learning, yielding an efficient algorithm (OTQMS) that enhances accuracy and data efficiency.
Abstract: Multi-source transfer learning offers an effective solution to data scarcity in real-world supervised learning by leveraging multiple source tasks. Existing works in this field typically use all available source samples during training, which limits training efficiency and can lead to suboptimal results. To address this, we propose a theoretical framework that answers the question: what is the optimal quantity of samples needed from each source task to jointly train the target model? Specifically, we introduce a generalization error measure based on K-L divergence and minimize it through high-dimensional statistical analysis to determine the optimal transfer quantity for each source task. We further develop OTQMS, an architecture-agnostic and data-efficient algorithm that implements our theoretical results for target model training in multi-source transfer learning. Experimental studies on diverse architectures and two real-world benchmark datasets show that the proposed algorithm significantly outperforms state-of-the-art approaches in both accuracy and data efficiency. The code is available at https://github.com/zqy0126/OTQMS.
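To make the sample-allocation idea concrete, here is a minimal illustrative sketch of the kind of optimization the abstract describes. It assumes a generic bias-variance surrogate objective (a KL-weighted source-contamination term plus a variance term that shrinks with the total sample count); this is not the paper's actual generalization error measure or the OTQMS algorithm, and the names `surrogate_error`, `optimal_transfer_quantities`, and the constant `c` are hypothetical.

```python
# Illustrative sketch only: the objective below is a generic bias-variance
# surrogate, NOT the paper's generalization error measure, and this function
# is not the OTQMS algorithm. All names here are hypothetical.
import numpy as np
from scipy.optimize import minimize


def surrogate_error(n, kl, n_target, c=1.0):
    """Toy bound: transfer bias grows with KL-weighted source mass,
    while estimation variance shrinks with the total sample count."""
    total = n_target + n.sum()
    bias = (n * kl).sum() / total  # KL-weighted contamination from sources
    variance = c / np.sqrt(total)  # estimation error on the pooled sample
    return bias + variance


def optimal_transfer_quantities(kl, source_sizes, n_target, c=1.0):
    """Minimize the surrogate over per-source transfer counts n_k,
    subject to 0 <= n_k <= (samples available from source k)."""
    x0 = np.asarray(source_sizes, dtype=float) / 2.0  # start at half of each source
    bounds = [(0.0, float(s)) for s in source_sizes]
    res = minimize(surrogate_error, x0,
                   args=(np.asarray(kl, dtype=float), n_target, c),
                   bounds=bounds, method="L-BFGS-B")
    return np.round(res.x).astype(int)


if __name__ == "__main__":
    kl = [0.05, 0.30, 1.20]  # estimated KL(source_k || target), assumed given
    source_sizes = [5000, 5000, 5000]
    print(optimal_transfer_quantities(kl, source_sizes, n_target=500))
```

Under this surrogate, sources closer to the target distribution (small KL) contribute more samples, while distant sources are down-weighted or excluded; the bounded L-BFGS-B solver enforces per-source availability constraints.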
Primary Area: General machine learning (supervised, unsupervised, online, active, etc.)
Submission Number: 20897