Nesterov Meets Robust Multitask Learning Twice

Published: 26 Oct 2023, Last Modified: 13 Dec 2023, NeurIPS 2023 Workshop Poster
Keywords: multi-task learning, smoothing, duality, regularizer
Abstract: In this paper, we study a temporal multitask learning problem in which a smoothness constraint is imposed on the time-series weights. In addition, a group lasso penalty is introduced to select important features. Moreover, the regression loss in each time frame is non-squared to alleviate the influence of varying noise scales across tasks, and a nuclear norm is added to promote the low-rank property. We first formulate the objective as a max-min problem, in which the dual variable is optimized via an accelerated dual ascent method, while the primal variable is solved via the \textit{smoothed Fast Iterative Shrinkage-Thresholding Algorithm} (S-FISTA). We provide a convergence analysis of the proposed method, and experiments demonstrate its effectiveness.
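The abstract builds on FISTA with Nesterov acceleration. The paper's S-FISTA additionally smooths the nonsmooth terms (e.g. the nuclear norm) before accelerating; as a hedged illustration of the underlying scheme only, here is a minimal sketch of plain FISTA applied to a lasso problem, min 0.5||Ax-b||^2 + lam||x||_1 (the problem instance, function names, and parameters are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=500):
    """Plain FISTA for min 0.5||Ax - b||^2 + lam * ||x||_1.

    Illustrative sketch only: the paper's S-FISTA further smooths
    nonsmooth regularizers before applying the accelerated scheme.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)           # gradient of the smooth part at y
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)   # Nesterov momentum step
        x, t = x_new, t_new
    return x
```

The momentum sequence t and the extrapolated point y are what give FISTA its O(1/k^2) rate over plain proximal gradient descent, which is the acceleration the title's "Nesterov" refers to.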
Submission Number: 58