Tasks Without Borders: A New Approach to Online Multi-Task Learning

Anonymous

16 May 2019 (modified: 05 May 2023) · AMTL 2019 · Readers: Everyone
Keywords: multi-task learning, dependent data, time series
TL;DR: A new algorithm for online multi-task learning that learns without restarts at the task borders
Abstract: We introduce MTLAB, a new algorithm for learning multiple related tasks with strong theoretical guarantees. Its key idea is to perform learning sequentially over the data of all tasks, without interruptions or restarts at task boundaries. Predictors for individual tasks are derived from this process by an additional online-to-batch conversion step. By learning across task boundaries, MTLAB achieves a regret with respect to the true risks that is sublinear in the number of tasks. In the lifelong learning setting, this leads to an improved generalization bound that converges with the total number of samples across all observed tasks, rather than with the number of examples per task or the number of tasks separately. At the same time, the algorithm is widely applicable: it can handle finite sets of tasks, as is common in multi-task learning, as well as stochastic task sequences, as studied in lifelong learning.
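The following is a minimal sketch (our illustration, not the authors' released code) of the idea the abstract describes: an online learner processes the samples of all tasks as one uninterrupted stream, and a predictor for each task is obtained by averaging the iterates produced during that task, a standard online-to-batch conversion. The choice of online gradient descent, the squared loss, and the step size are assumptions made for this example only.

```python
import numpy as np

def mtlab_sketch(task_streams, dim, lr=0.1):
    """Illustrative sketch: online gradient descent over a single
    uninterrupted stream formed by concatenating all tasks' data.
    Assumes a linear model with squared loss (our choice, not from
    the paper). Per-task predictors come from averaging the iterates
    observed within each task (online-to-batch conversion)."""
    w = np.zeros(dim)                     # shared online iterate, never reset
    task_predictors = []
    for X, y in task_streams:             # tasks arrive sequentially
        iterates = []
        for x_t, y_t in zip(X, y):        # one online step per sample
            grad = 2.0 * (w @ x_t - y_t) * x_t   # gradient of squared loss
            w = w - lr * grad             # no restart at the task boundary
            iterates.append(w.copy())
        # online-to-batch conversion: average this task's iterates
        task_predictors.append(np.mean(iterates, axis=0))
    return task_predictors

# Toy usage: three related linear regression tasks sharing structure
rng = np.random.default_rng(0)
w_star = rng.normal(size=5)
streams = []
for _ in range(3):
    X = rng.normal(size=(50, 5))
    y = X @ (w_star + 0.1 * rng.normal(size=5))  # small per-task deviation
    streams.append((X, y))
predictors = mtlab_sketch(streams, dim=5)
```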