ReDiTT: Retrieval Augmented Conditional Diffusion Transformers for Asynchronous Time Series

Published: 01 Mar 2026, Last Modified: 01 Mar 2026, ICLR 2026 TSALM Workshop Poster, CC BY 4.0
Keywords: Asynchronous Time Series, Conditional Diffusion Transformer, Deep Learning
TL;DR: We introduce ReDiTT, a retrieval-augmented latent diffusion transformer for asynchronous time series forecasting.
Abstract: We present a diffusion-based model for asynchronous time series prediction, where the goal is to predict the next inter-event time and event type. To address the inherent uncertainty of future events, we introduce ReDiTT, a retrieval-augmented conditional diffusion transformer that operates in latent space. ReDiTT retrieves structurally similar latent sequences from a memory bank during both training and inference and incorporates them as reference conditions through cross-attention. This retrieval-based conditioning allows the model to attend to relevant temporal dynamics and provides global structural guidance for generation. As a result, ReDiTT stabilizes long-horizon forecasting and improves sample diversity. Experiments on seven real-world datasets demonstrate state-of-the-art performance on next-event prediction and long-horizon forecasting.
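The retrieval-conditioning idea described above can be illustrated with a minimal sketch: retrieve the latent sequences most similar to the current latent from a memory bank, then fold them into the representation via scaled dot-product cross-attention. All names, shapes, and the similarity measure below are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of retrieval-augmented cross-attention conditioning.
# Function names, shapes, and cosine-similarity retrieval are assumptions
# for illustration; they are not taken from the ReDiTT paper.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def retrieve(query, memory_bank, k=2):
    """Return the k memory sequences most similar to the query.

    query:       (T, d) current latent sequence
    memory_bank: (N, T, d) stored latent sequences
    """
    q = query.ravel()
    flat = memory_bank.reshape(len(memory_bank), -1)
    sims = flat @ q / (np.linalg.norm(flat, axis=1) * np.linalg.norm(q) + 1e-8)
    top = np.argsort(-sims)[:k]
    return memory_bank[top]                    # (k, T, d)

def cross_attend(x, refs):
    """Condition x on retrieved references via cross-attention (residual update)."""
    d = x.shape[-1]
    kv = refs.reshape(-1, d)                   # concatenate references along time
    attn = softmax(x @ kv.T / np.sqrt(d))      # (T, k*T) attention weights
    return x + attn @ kv                       # (T, d)

rng = np.random.default_rng(0)
bank = rng.normal(size=(8, 4, 16))             # toy memory bank of latent sequences
x = rng.normal(size=(4, 16))                   # current latent to be conditioned
out = cross_attend(x, retrieve(x, bank, k=2))
print(out.shape)                               # (4, 16)
```

In the full model this conditioning step would sit inside each transformer block of the latent diffusion backbone; the sketch shows only the retrieval-then-attend pattern.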
Track: Research Track (max 4 pages)
Submission Number: 16