TL;DR: We introduce adapters to enhance pre-trained univariate time-series foundation models (FMs) for multivariate tasks, addressing feature dependencies and uncertainty quantification.
Abstract: Pre-trained foundation models (FMs) have shown exceptional performance in univariate time series forecasting tasks. However, several practical challenges persist, including managing intricate dependencies among features and quantifying uncertainty in predictions. This study aims to tackle these critical limitations by introducing **adapters**—feature-space transformations that facilitate the effective use of pre-trained univariate time series FMs for multivariate tasks. Adapters operate by projecting multivariate inputs into a suitable latent space and applying the FM independently to each dimension. Inspired by the literature on representation learning and partially stochastic Bayesian neural networks, we present a range of adapters and optimization/inference strategies. Experiments conducted on both synthetic and real-world datasets confirm the efficacy of adapters, demonstrating substantial enhancements in forecasting accuracy and uncertainty quantification compared to baseline methods. Our framework, **AdaPTS**, positions adapters as a modular, scalable, and effective solution for leveraging time series FMs in multivariate contexts, thereby promoting their wider adoption in real-world applications. We release the code at https://github.com/abenechehab/AdaPTS.
Lay Summary: Powerful AI models called foundation models have revolutionized forecasting for single time series (like predicting one stock price). However, they struggle with real-world scenarios involving multiple interconnected variables (like predicting temperature, humidity, and wind speed simultaneously): these models cannot handle the complex relationships between variables, and they do not tell us how confident they are in their predictions. Both are critical issues for practical applications. To address this challenge, we developed AdaPTS, a framework built on "adapters": translation layers that let existing single-variable forecasting models work with multiple variables. An adapter acts as an interpreter: it takes complex multi-dimensional data, transforms it into a format the pre-trained model understands, and then lets the model make predictions for each dimension separately. We designed several types of adapters, inspired by advanced machine learning techniques, that not only improve accuracy but also provide uncertainty estimates.
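The adapter idea described above can be sketched in a few lines. This is a minimal illustrative example, not the AdaPTS implementation: it assumes a hypothetical linear adapter and uses a naive last-value forecaster as a stand-in for a pre-trained univariate foundation model.

```python
import numpy as np

rng = np.random.default_rng(0)

def univariate_forecast(series, horizon):
    # Stand-in for a pre-trained univariate FM: naive last-value forecast.
    return np.full(horizon, series[-1])

class LinearAdapter:
    """Hypothetical linear adapter: encode -> per-channel FM -> decode."""

    def __init__(self, n_features, latent_dim, rng):
        # Random orthonormal projection as an illustrative feature-space map;
        # in practice the adapter's parameters would be learned.
        q, _ = np.linalg.qr(rng.standard_normal((n_features, latent_dim)))
        self.W = q  # shape (n_features, latent_dim)

    def forecast(self, x, horizon):
        # x: (time, n_features) multivariate history.
        z = x @ self.W  # encode into latent space: (time, latent_dim)
        # Apply the univariate model independently to each latent channel.
        z_hat = np.stack(
            [univariate_forecast(z[:, j], horizon) for j in range(z.shape[1])],
            axis=1,
        )
        return z_hat @ self.W.T  # decode back to feature space

adapter = LinearAdapter(n_features=3, latent_dim=2, rng=rng)
history = rng.standard_normal((48, 3))
pred = adapter.forecast(history, horizon=12)
print(pred.shape)  # (12, 3)
```

The key property is that the pre-trained univariate model is never modified; only the feature-space transformation around it is chosen or learned.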
Link To Code: https://github.com/abenechehab/AdaPTS
Primary Area: Applications->Time Series
Keywords: Multivariate time series forecasting, Foundation models, Probabilistic inference
Submission Number: 11687