A survey on transfer learning for evolving domains

04 Mar 2026 (modified: 07 Mar 2026) · Under review for TMLR · CC BY 4.0
Abstract: Transfer learning explores how to leverage knowledge from various tasks or domains (sources) to enhance predictive performance in related tasks or domains (targets). Transfer learning research is typically segmented into several isolated sub-areas (such as domain generalisation, domain adaptation, or multi-domain learning), each making distinct assumptions about target data availability, such as the quantity of data and labels available at training time. However, in several real-world applications these problems occur as a continuum, evolving from one stage to another as more data and labels are progressively collected from each domain. In such cases, a robust transfer learning solution should seamlessly integrate an expanding dataset and progressively improve its performance over time. In this survey, we review the state of the art in transfer learning from the perspective of this continuum, focusing on the data requirements of each method. We find that most methods are tailored to specific settings, and no current work considers an integrated view over the whole spectrum of data availability. We refer to this new perspective on transfer learning as Transfer Learning for Evolving Domains (TrED) and argue that it is an important and challenging direction for future research.
Submission Type: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Yi_Zhou2
Submission Number: 7764