Analysing Multi-Task Regression via Random Matrix Theory with Application to Time Series Forecasting

Published: 25 Sept 2024, Last Modified: 06 Nov 2024. NeurIPS 2024 spotlight. License: CC BY 4.0
Keywords: Random Matrix Theory; Optimization; Regularization; Multi-task regression; Multi-task learning; Multivariate Time Series Forecasting
TL;DR: This paper introduces a multi-task regression framework based on random matrix theory for precise performance estimation, and proposes a regularization-based optimization, validated empirically, that improves univariate models by letting them exploit multi-task information.
Abstract: In this paper, we introduce a novel theoretical framework for multi-task regression, applying random matrix theory to provide precise performance estimates under high-dimensional, non-Gaussian data distributions. We formulate a multi-task optimization problem as a regularization technique that enables single-task models to leverage multi-task learning information. We derive a closed-form solution for multi-task optimization in the context of linear models. Our analysis provides valuable insights by linking multi-task learning performance to various model statistics, such as raw data covariances, signal-generating hyperplanes, noise levels, and the size and number of datasets. Finally, we propose consistent estimators of the training and testing errors, thereby offering a robust foundation for hyperparameter optimization in multi-task regression scenarios. Experimental validation on both synthetic and real-world datasets, in regression and multivariate time series forecasting, demonstrates improvements over univariate models obtained by incorporating our method into the training loss, thus leveraging multivariate information.
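The abstract describes casting multi-task learning as a regularizer that couples per-task linear models, solvable in closed form. As a rough illustration only (not the paper's exact estimator), the sketch below implements a generic multi-task ridge formulation in which each task's weight vector `w_t` is shrunk toward a shared vector `w0`; the objective `sum_t ||X_t w_t - y_t||^2 + lam*||w_t||^2 + gamma*||w_t - w0||^2` admits per-task closed-form updates, alternated with an update of `w0`. All names and the specific objective here are illustrative assumptions.

```python
import numpy as np

def multitask_ridge(Xs, ys, lam=0.1, gamma=1.0, n_iter=50):
    """Illustrative multi-task ridge (NOT the paper's exact method).

    Minimizes, over per-task weights w_t and a shared vector w0:
        sum_t ||X_t w_t - y_t||^2 + lam*||w_t||^2 + gamma*||w_t - w0||^2
    Each w_t has a closed-form ridge-style update given w0, and w0's
    optimal value given the w_t is their mean; we alternate the two.
    """
    d = Xs[0].shape[1]
    w0 = np.zeros(d)
    ws = [np.zeros(d) for _ in Xs]
    for _ in range(n_iter):
        for t, (X, y) in enumerate(zip(Xs, ys)):
            # Closed-form update: (X^T X + (lam+gamma) I) w_t = X^T y + gamma w0
            A = X.T @ X + (lam + gamma) * np.eye(d)
            ws[t] = np.linalg.solve(A, X.T @ y + gamma * w0)
        w0 = np.mean(ws, axis=0)  # shared component = average of task weights
    return ws, w0
```

The coupling strength `gamma` interpolates between independent single-task ridge regressions (`gamma=0`) and a fully shared model (`gamma` large), which is the kind of trade-off the paper's hyperparameter-optimization machinery is meant to tune.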
Primary Area: Learning theory
Submission Number: 17983