Beyond Statistical Changepoint Detection: Semantic Interpretation of Time Series via LLMs

Published: 01 Mar 2026 · Last Modified: 01 Mar 2026 · ICLR 2026 TSALM Workshop Poster · CC BY 4.0
Keywords: Time Series, LLMs, Statistical Theory
TL;DR: Changepoints are sparse linear-spline interpolants of potentially nonlinear time series, and LLMs can recover meaningful changes from these points
Abstract: Changepoint detection algorithms identify where structural breaks occur but are conventionally used under a one-to-one mapping between detected breaks and real-world events. We show this mapping assumption is undermined by a fundamental ambiguity: the confidence interval for a detected break widens as the slope jump shrinks, so a wide interval may indicate either a mild genuine break or an approximation artifact from fitting piecewise-linear segments to nonlinear dynamics. This ambiguity is not identifiable from the time series alone. Hence, we propose a different paradigm, treating the $\ell^0$ changepoint output as a sparse piecewise-linear representation whose slope transitions and confidence intervals serve as structured inputs for LLM semantic interpretation, grounded by in-context learning examples and external knowledge retrieval. The LLM classifies patterns into isolated structural breaks, coherent multi-changepoint structures, and nonlinear dynamic transitions. On two FRED economic time series, our framework achieves perfect recall against NBER recession dates while recovering semantic structures---such as grouping four ambiguous Volcker-era changepoints into one coherent event---that traditional methods detect but cannot interpret.
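The abstract's core object, a sparse piecewise-linear fit whose slope jumps mark candidate changepoints, can be illustrated with a toy example. The sketch below is not the paper's method (which uses ℓ⁰-penalized changepoint detection and LLM interpretation); it is a minimal single-breakpoint exhaustive search over two linear segments, assuming a synthetic series with one slope break:

```python
import numpy as np

def fit_two_segments(y):
    """Exhaustive search for the single breakpoint minimizing the total
    squared error of two independent linear fits. A toy stand-in for
    l0-penalized multi-changepoint detection."""
    n = len(y)
    t = np.arange(n)
    best_k, best_sse = None, np.inf
    for k in range(2, n - 2):  # require at least 2 points per segment
        sse = 0.0
        for lo, hi in ((0, k), (k, n)):
            seg_t, seg_y = t[lo:hi], y[lo:hi]
            coef = np.polyfit(seg_t, seg_y, 1)          # slope, intercept
            sse += float(np.sum((np.polyval(coef, seg_t) - seg_y) ** 2))
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k, best_sse

# Synthetic series: slope changes from +1.0 to -0.5 at t = 50.
rng = np.random.default_rng(0)
t = np.arange(100)
y = np.where(t < 50, 1.0 * t, 50.0 - 0.5 * (t - 50)) + rng.normal(0, 0.5, 100)

k, _ = fit_two_segments(y)
print("detected breakpoint:", k)  # close to the true break at t = 50
```

Per the abstract's ambiguity argument, the same fit applied to a smooth nonlinear trend (e.g. a quadratic) would still return a "breakpoint", but with a small slope jump and a wide confidence interval: an approximation artifact rather than a genuine structural break.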
Track: Research Track (max 4 pages)
Submission Number: 110