Abstract: Foundation models have been successfully adapted to time series forecasting owing to their ability to capture long-range dependencies, as demonstrated in the field of Natural Language Processing (NLP). However, the effectiveness of applying these pre-trained time series foundation models (TSFMs) to a target domain is limited by the need for hyperparameter optimization to match the characteristics of that domain. To address this limitation, we propose a novel algorithm, AT4TS (Autotune for Time Series Foundation Models), that efficiently automates the selective fine-tuning of pre-trained TSFMs for a given target domain. Our approach removes the tedious task of manually configuring the tunable hyperparameters required to selectively update parameters and thereby enhance predictive performance on unseen out-of-domain target datasets. AT4TS is validated with diverse pre-trained models such as Chronos and Tiny Time Mixers (TTM), fine-tuning strategies such as Low-Rank Adaptation (LoRA) and custom fine-tuning, and state-of-the-art hyperparameter optimization (HPO) methods. Extensive experimental results on real-world benchmark datasets demonstrate that AT4TS efficiently identifies the optimal configuration of tunable hyperparameters for autotuning TSFMs. We show improvements as high as 20.55% and 45.34% on one of the out-of-domain datasets compared to the zero-shot pre-trained Chronos and TTM models, respectively.
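To make the autotuning idea concrete, below is a minimal sketch of wrapping an HPO loop around LoRA fine-tuning of a pre-trained TSFM, using Optuna as the search backend. The search space (LoRA rank, alpha, learning rate) and the `fine_tune_and_evaluate` helper are illustrative assumptions for this sketch, not the AT4TS implementation described in the paper.

```python
# Illustrative sketch only: not the AT4TS implementation from the paper.
import optuna


def fine_tune_and_evaluate(lora_rank: int, lora_alpha: int, learning_rate: float) -> float:
    # Hypothetical stand-in for the real loop: LoRA fine-tune the chosen TSFM
    # (e.g., Chronos or TTM) on the target-domain training split and return a
    # validation forecasting error such as MASE (lower is better).
    # The arbitrary expression below only keeps the sketch runnable.
    return abs(learning_rate - 1e-3) + 1.0 / lora_rank + 0.01 * lora_alpha


def objective(trial: optuna.Trial) -> float:
    # Search space over the tunable fine-tuning hyperparameters (assumed ranges).
    lora_rank = trial.suggest_categorical("lora_rank", [4, 8, 16, 32])
    lora_alpha = trial.suggest_categorical("lora_alpha", [8, 16, 32, 64])
    learning_rate = trial.suggest_float("learning_rate", 1e-5, 1e-2, log=True)
    return fine_tune_and_evaluate(lora_rank, lora_alpha, learning_rate)


study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=25)
print("Best configuration:", study.best_params)
```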
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Added acknowledgements along with minor revisions requested by the Action Editor.
Assigned Action Editor: ~Mingsheng_Long2
Submission Number: 4931