TS-Haystack: A Multi-Scale Retrieval Benchmark for Time Series Language Models

Published: 01 Mar 2026, Last Modified: 04 Apr 2026 · ICLR 2026 TSALM Workshop Poster · CC BY 4.0
Presentation Attendance: Yes, we will present in-person
Keywords: time-series language models, needle-in-a-haystack, multimodal reasoning, capture24, time series foundation models, event localization, anomaly detection, temporal grounding, temporal compression, flamingo, time series QA, activity recognition, latent compression, perceiver resampler
TL;DR: We introduce TS-Haystack, a long-context retrieval benchmark for TSLMs. We evaluate TSLMs in long contexts, revealing a task-dependent performance effect of latent compression architectures.
Abstract: Time Series Language Models (TSLMs) are emerging as unified models for reasoning over continuous signals in natural language. However, long-context retrieval remains a major limitation: existing models are typically trained and evaluated on short sequences, while real-world time-series sensor streams can span millions of datapoints. This mismatch requires precise temporal localization under strict computational constraints, a regime not captured by current benchmarks. We introduce TS-Haystack, a long-context temporal retrieval benchmark comprising ten task types across four categories: direct retrieval, temporal reasoning, multi-step reasoning, and contextual anomaly detection. The benchmark uses controlled needle insertion, embedding short activity bouts into longer longitudinal accelerometer recordings, enabling systematic evaluation across context lengths ranging from seconds to 2 hours per sample. We hypothesize that existing TSLM time series encoders discard temporal granularity as context length increases, creating a task-dependent effect: compression aids classification but impairs retrieval of localized events. Across multiple models and encoding strategies, we observe a consistent divergence between classification and retrieval behavior. Learned latent compression preserves or improves classification accuracy at compression ratios up to 176$\times$, but retrieval performance degrades with context length, incurring a loss of temporally localized information. These results highlight the importance of architectural designs that decouple sequence length from computational complexity while preserving temporal fidelity.
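The controlled needle-insertion construction described in the abstract can be sketched as follows. This is a minimal illustration, not the benchmark's actual code: the function name, the zero/one placeholder signals, and the sampling rate are all hypothetical, chosen only to show how a short activity bout is embedded into a long recording with its ground-truth location recorded for evaluation.

```python
import numpy as np

def insert_needle(haystack, needle, rng):
    """Embed a short activity bout (needle) into a long recording (haystack).

    Returns the modified recording and the (start, end) sample indices of the
    inserted bout, so that retrieval answers can be graded exactly.
    """
    start = int(rng.integers(0, len(haystack) - len(needle) + 1))
    end = start + len(needle)
    out = haystack.copy()
    out[start:end] = needle  # overwrite the background with the bout
    return out, (start, end)

# Example: a 2-hour triaxial recording at 100 Hz with a 10-second bout.
# Placeholder signals: zeros for background, ones for the distinctive bout.
fs = 100
haystack = np.zeros((2 * 3600 * fs, 3))
needle = np.ones((10 * fs, 3))
sample, (start, end) = insert_needle(haystack, needle, np.random.default_rng(0))
```

Varying the haystack length while holding the needle fixed is what lets the benchmark measure how retrieval degrades with context length.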
Track: Research Track (max 4 pages)
Submission Number: 69