In-Context Fine-Tuning for Time-Series Foundation Models

Published: 01 May 2025, Last Modified: 18 Jun 2025. ICML 2025 poster. License: CC BY 4.0.
Abstract: Motivated by the recent success of time-series foundation models for zero-shot forecasting, we present a methodology for _in-context fine-tuning_ of a time-series foundation model. In particular, we design a pretrained foundation model that can be prompted (at inference time) with multiple time-series examples, in order to forecast a target time-series into the future. Our foundation model is specifically trained to utilize examples from multiple related time-series in its context window (in addition to the history of the target time-series) to help it adapt to the specific distribution of the target domain at inference time. We show that such a foundation model that uses in-context examples at inference time can obtain much better performance on popular forecasting benchmarks compared to supervised deep learning methods, statistical models, and other time-series foundation models. Interestingly, our in-context fine-tuning approach even matches the performance of a foundation model that is explicitly fine-tuned on the target domain.
Lay Summary: Traditional time-series forecasting models follow the standard supervised learning paradigm of training on task-specific data before forecasting for that task. Recently, however, time-series foundation models, which are pretrained on a large set of time-series across multiple domains, have shown strong zero-shot forecasting performance. Despite this success, existing time-series foundation models lack some of the desirable features of LLMs with respect to in-context learning: the zero-shot performance of an LLM can be greatly improved at inference time by using its context window to provide additional instructional prompts with the input. We design a pretrained foundation model that can be prompted (at inference time) with multiple time-series examples to forecast a target time-series into the future. Our foundation model is specifically trained to utilize examples from multiple related time-series in its context window to help it adapt to the specific distribution of the target domain at inference time. We show that in-context examples can boost the performance of a foundation model on a number of popular forecasting benchmarks relative to a large number of baselines. Interestingly, our in-context fine-tuning approach even matches the performance of a foundation model that is explicitly fine-tuned on the target domain.
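To make the inference-time setup concrete, below is a minimal sketch of how a target series' history might be packed together with in-context examples from related series before being passed to the model. The `load_pretrained_forecaster` name, the `forecast` method, and the prompt layout are hypothetical placeholders for illustration; the paper's actual model interface is not specified on this page.

```python
import numpy as np

def build_icf_prompt(context_examples, target_history):
    """Assemble the model input: related in-context series plus the target history.

    Each in-context example is a 1-D array drawn from a related series in the
    target domain; the model is assumed to consume them alongside the history
    of the series to be forecast (a hypothetical interface, for illustration).
    """
    return {
        "in_context_examples": [np.asarray(x, dtype=np.float32) for x in context_examples],
        "target_history": np.asarray(target_history, dtype=np.float32),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three related series from the target domain, used as in-context examples.
    related = [np.cumsum(rng.normal(size=256)) for _ in range(3)]
    # History of the target series to be forecast.
    history = np.cumsum(rng.normal(size=128))

    prompt = build_icf_prompt(related, history)
    # Hypothetical calls, shown only to indicate where the prompt would be used:
    # model = load_pretrained_forecaster("in-context-finetuned-tsfm")
    # forecast = model.forecast(**prompt, horizon=32)
```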
Primary Area: Deep Learning->Sequential Models, Time series
Keywords: Time-series forecasting, Foundation models, In-context learning
Submission Number: 12653