TS2Code: Enhancing Time Series Understanding via Learning to Code

17 Sept 2025 (modified: 23 Jan 2026) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Time Series, Time Series Forecasting, Vision Language Models, Reinforcement Learning, Time Series Understanding
TL;DR: Code can be used to optimize the time series representation space in VLMs and also serve as a medium for time series forecasting.
Abstract: Despite improvements in multimodal reasoning and code generation, language models still perform poorly on time series forecasting and reasoning. To address this, we propose TS2Code, a novel multi-modal training objective for learning multi-modal representation spaces for time series data. TS2Code trains vision-language models to convert time series into code that reconstructs the input time series when executed. This reconstruction serves as a verifiable reward, which lets us use reinforcement learning (RL) to train models to write better code. In extensive experiments, we find that training models to convert time series into code improves their zero-shot performance on time series forecasting, anomaly detection, and reasoning, with gains increasing with model size. In addition, by controlling code structure through RL, we find that rewarding particular code styles, such as minimal digit usage, further improves performance. Our implementation will be posted publicly, and is available to reviewers anonymously at https://anonymous.4open.science/r/TS2Code-830E.
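The core idea, a reconstruction-based verifiable reward, can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the function `reconstruction_reward`, the convention that generated code defines a variable `series`, and the negative-normalized-MSE scoring are all assumptions made for the sketch.

```python
import numpy as np

def reconstruction_reward(generated_code: str, target: np.ndarray) -> float:
    """Execute model-generated code and score how closely its output
    reconstructs the target time series (hypothetical sketch)."""
    namespace = {"np": np}
    try:
        # Assumed convention: the generated code defines a variable `series`.
        exec(generated_code, namespace)
        recon = np.asarray(namespace["series"], dtype=float)
    except Exception:
        return -1.0  # unrunnable code receives the minimum reward
    if recon.shape != target.shape:
        return -1.0
    # Negative MSE normalized by target variance; perfect reconstruction -> 0.0
    mse = float(np.mean((recon - target) ** 2))
    scale = float(np.var(target)) + 1e-8
    return -mse / scale

# Toy usage: code that exactly regenerates a sine wave gets the maximum reward.
t = np.linspace(0, 2 * np.pi, 64)
target = np.sin(t)
code = "import numpy as np\nt = np.linspace(0, 2*np.pi, 64)\nseries = np.sin(t)"
print(reconstruction_reward(code, target))  # perfect reconstruction -> reward of 0
```

Because the reward is computed purely by running the code and comparing its output to the input series, it needs no human labels, which is what makes it usable as a verifiable RL signal.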
Primary Area: learning on time series and dynamical systems
Submission Number: 9956