DCTS: Fusing Discrete and Continuous Information for Time Series Forecasting

ICLR 2026 Conference Submission 16347 Authors

19 Sept 2025 (modified: 08 Oct 2025) · CC BY 4.0
Keywords: Deep Learning, Time Series Forecasting
TL;DR: Fusing discrete information into a continuous model
Abstract: In time series analysis, data is usually treated as a set of continuous values, and conventional methods perform all computations, from inputs to outputs, in continuous form. While continuous representations are highly expressive, they can also attend too closely to fine-grained details, risking the introduction of noise and the loss of critical information. In contrast, discrete representations assign a single code to each temporal pattern. In this way, the key patterns underlying the data are extracted more effectively, although some informative details may be filtered out along with the noise. To combine the advantages of both, we propose fusing discrete and continuous information for time series forecasting (DCTS), a method that incorporates both continuous and discrete approaches, fusing the expressive power of continuous encoding with the pattern-abstracting ability of discrete encoding. It uses a codebook learned via vector quantization to extract a discrete encoding from the time series and then fuses it with the continuous encoding, so the model can benefit from the strengths of both representations. Additionally, we use multiple codebooks to encode the time series: a single code can hardly cover the entire feature space of a time series, whereas multiple discrete values can be combined, exponentially expanding the encoding space and achieving much stronger expressive power. We evaluated the proposed method on multiple real-world datasets, where it achieved the best performance compared to the baseline methods.
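To make the core idea concrete, here is a minimal sketch of vector-quantized encoding with multiple codebooks fused back into a continuous encoding. This is an illustration of the general technique only, not the authors' implementation: the function names (`quantize`, `multi_codebook_encode`), the averaging across codebooks, and the additive fusion are all assumptions made for the example.

```python
import numpy as np

def quantize(z, codebook):
    # Nearest-neighbor lookup: map each continuous vector to its closest code.
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return codebook[dists.argmin(axis=1)]

def multi_codebook_encode(z, codebooks):
    # Each codebook quantizes independently; the combined code space grows
    # exponentially with the number of codebooks (here results are averaged,
    # an illustrative choice).
    return np.mean([quantize(z, cb) for cb in codebooks], axis=0)

rng = np.random.default_rng(0)
z_cont = rng.normal(size=(4, 8))                          # continuous encoding: 4 segments, dim 8
codebooks = [rng.normal(size=(16, 8)) for _ in range(3)]  # 3 codebooks of 16 codes each

z_disc = multi_codebook_encode(z_cont, codebooks)         # discrete (quantized) encoding
z_fused = z_cont + z_disc                                 # simple additive fusion (assumed)
```

With 3 codebooks of 16 codes each, the combined discrete space covers 16^3 = 4096 distinct combinations per segment, which is what the abstract means by exponentially expanding the encoding space.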
Supplementary Material: zip
Primary Area: learning on time series and dynamical systems
Submission Number: 16347