Align and Fine-Tune: Enhancing LLMs for Time-Series Forecasting

Published: 10 Oct 2024 · Last Modified: 26 Nov 2024 · NeurIPS 2024 TSALM Workshop · CC BY 4.0
Keywords: Representation Learning, Multivariate Time-Series, Large Language Models, Time-Series Forecasting
TL;DR: We introduce LLM4TS, a framework that adapts pre-trained Large Language Models for multivariate time-series forecasting using a two-stage fine-tuning process, achieving superior performance in both full-shot and few-shot scenarios.
Abstract: Multivariate time-series forecasting is vital in fields like economic planning and weather prediction, but deep models often require large datasets, limiting their practicality. Pre-trained Large Language Models (LLMs) have been adapted for time-series tasks, yet challenges persist due to the mismatch between time-series and linguistic data and the need for multi-scale temporal processing. To address these challenges, we introduce LLM4TS, a framework that leverages LLMs for time-series forecasting through a two-stage fine-tuning process: *time-series alignment* to adapt LLMs to time-series data and *forecasting fine-tuning* for specific downstream tasks. A novel two-level aggregation method integrates multi-scale temporal information within the LLM. Experiments show that LLM4TS outperforms state-of-the-art methods, excelling in both full-shot and few-shot scenarios. Comparisons with other unsupervised approaches highlight LLM4TS's superior representation-learning ability.
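To make the two-stage process concrete, below is a minimal, hypothetical PyTorch sketch of the training structure the abstract describes: a stage-1 alignment objective that adapts the backbone to patched time-series inputs, followed by a stage-2 supervised forecasting objective. All names (`LLM4TSSketch`, `patch_embed`, `align_head`, `forecast_head`) and the MSE objectives are illustrative assumptions, not the authors' actual implementation; a small Transformer encoder stands in for the pre-trained LLM backbone.

```python
# Hypothetical sketch of two-stage fine-tuning in the spirit of LLM4TS.
# NOT the paper's code: module names, losses, and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LLM4TSSketch(nn.Module):
    def __init__(self, d_model=128, patch_len=16, n_patches=24, horizon=96):
        super().__init__()
        # Stand-in for a frozen or partially tuned pre-trained LLM backbone.
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.patch_embed = nn.Linear(patch_len, d_model)    # patches -> token embeddings
        self.align_head = nn.Linear(d_model, patch_len)     # stage 1: next-patch prediction
        self.forecast_head = nn.Linear(d_model * n_patches, horizon)  # stage 2: forecast

    def encode(self, patches):
        # patches: (batch, n_patches, patch_len) -> (batch, n_patches, d_model)
        return self.backbone(self.patch_embed(patches))

def stage1_alignment_loss(model, patches):
    """Stage 1 (time-series alignment): predict each next patch from its context."""
    hidden = model.encode(patches[:, :-1])        # encode all but the last patch
    pred = model.align_head(hidden)               # predicted next patches
    return F.mse_loss(pred, patches[:, 1:])       # compare against shifted targets

def stage2_forecasting_loss(model, patches, target):
    """Stage 2 (forecasting fine-tuning): supervised loss on the horizon."""
    hidden = model.encode(patches)                # (batch, n_patches, d_model)
    pred = model.forecast_head(hidden.flatten(1)) # (batch, horizon)
    return F.mse_loss(pred, target)

if __name__ == "__main__":
    model = LLM4TSSketch()
    x = torch.randn(8, 24, 16)   # 8 series, 24 patches of length 16
    y = torch.randn(8, 96)       # 96-step forecasting targets
    print(stage1_alignment_loss(model, x).item())
    print(stage2_forecasting_loss(model, x, y).item())
```

In an actual run, stage 1 would be optimized first to align the backbone's representations with temporal structure, and stage 2 would then fine-tune the forecasting head (and possibly lightweight adapter parameters) on the downstream task.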
Submission Number: 5