Generalized Prompt Tuning: How to Use a Frozen Pre-Trained Univariate Time Series Foundation Model for Multivariate Time Series Prediction

Published: 10 Oct 2024 · Last Modified: 26 Nov 2024 · NeurIPS 2024 TSALM Workshop · CC BY 4.0
Keywords: time series, foundation model, prompt tuning
Abstract: Time series foundation models are pre-trained on large datasets and achieve state-of-the-art performance across diverse tasks. However, we observe that most current time series foundation models either are univariate or assume channel independence, meaning that they can handle multivariate time series but do not model how the different variables relate. In this paper, we propose a prompt-tuning-inspired fine-tuning technique, Generalized Prompt Tuning (Gen-P-Tuning), that adapts an existing univariate time series foundation model (treated as frozen) to multivariate time series prediction. Our approach provides a way to combine information across the channels (variables) of a multivariate time series. We demonstrate the effectiveness of our fine-tuning approach against various baselines on 8 classification and 4 forecasting datasets. Our code is available at: https://github.com/Ilovecodingforever/Gen-P-Tuning
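To make the core idea concrete, below is a minimal PyTorch-style sketch of adapting a frozen univariate backbone with a trainable prompt module that mixes information across channels. This is an illustration of the general technique described in the abstract, not the paper's actual implementation: the class and parameter names (GenPTuningSketch, prompt_net, n_prompt_tokens) and the assumed embedding shapes are hypothetical; see the linked repository for the authors' code.

    import torch
    import torch.nn as nn

    class GenPTuningSketch(nn.Module):
        """Hypothetical sketch: wrap a frozen univariate backbone with a
        trainable prompt module that combines information across channels."""

        def __init__(self, frozen_backbone: nn.Module, n_channels: int,
                     d_model: int, n_prompt_tokens: int = 4):
            super().__init__()
            self.backbone = frozen_backbone
            for p in self.backbone.parameters():  # backbone stays frozen
                p.requires_grad = False
            # Trainable prompt generator: summarizes all channels and emits
            # prompt tokens that are prepended to each channel's sequence.
            self.prompt_net = nn.Sequential(
                nn.Linear(n_channels * d_model, d_model),
                nn.ReLU(),
                nn.Linear(d_model, n_prompt_tokens * d_model),
            )
            self.n_prompt_tokens = n_prompt_tokens

        def forward(self, x_emb: torch.Tensor) -> torch.Tensor:
            # x_emb: (batch, channels, tokens, d_model) per-channel embeddings
            b, c, t, d = x_emb.shape
            # Pool over time, then flatten channels into one summary vector.
            summary = x_emb.mean(dim=2).reshape(b, c * d)
            prompts = self.prompt_net(summary).view(b, 1, self.n_prompt_tokens, d)
            prompts = prompts.expand(b, c, self.n_prompt_tokens, d)
            # Prepend the cross-channel prompts to every channel's sequence.
            z = torch.cat([prompts, x_emb], dim=2)
            # Run the frozen univariate backbone on each channel independently.
            out = self.backbone(z.reshape(b * c, t + self.n_prompt_tokens, d))
            return out.reshape(b, c, -1)

Only prompt_net is updated during fine-tuning, so the number of trainable parameters is small relative to the backbone, while the prompt tokens give the frozen univariate model access to cross-channel information.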
Submission Number: 72