Can Large Language Models be Anomaly Detectors for Time Series?

Published: 01 Jan 2024 · Last Modified: 23 May 2025 · DSAA 2024 · CC BY-SA 4.0
Abstract: The flexible nature of large language models allows them to be used for diverse applications. Recent studies have showcased numerous abilities of these models, including time series forecasting. In this paper, we present a novel study of large language models applied to the challenging task of time series anomaly detection. This problem entails two aspects that are novel for LLMs specifically: first, the model needs to identify one or more parts of an input sequence as anomalous; and second, the model needs to work with time series data rather than text input. We introduce SIGLLM, a framework for time series anomaly detection using large language models. Our framework includes a time-series-to-text conversion module, as well as end-to-end pipelines that prompt language models to perform time series anomaly detection. We investigate two paradigms for testing the abilities of large language models to perform the detection task. First, we present a prompt-based detection method that directly asks a language model to indicate which elements of the input are anomalies. Second, we leverage the forecasting capability of a large language model to guide the anomaly detection process. We evaluated our framework on 11 datasets spanning various sources, using 10 pipelines. We show that the forecasting method significantly outperformed the prompting method on all 11 datasets with respect to the F1 score. Moreover, while large language models are capable of finding anomalies, state-of-the-art deep learning models remain superior, achieving a 30% improvement in performance.
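The abstract mentions a time-series-to-text conversion module that lets an LLM consume numeric signals. A minimal sketch of what such a conversion might look like is below; the shifting, rounding, and comma-joining choices are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def series_to_text(values, decimals=0):
    """Convert a numeric time series into a comma-separated token string.

    Shifts the series to be non-negative and rounds to a fixed precision,
    then joins values with commas so a language-model tokenizer sees short
    digit groups. Illustrative sketch only; SIGLLM's actual conversion
    module may differ in its scaling and tokenization details.
    """
    arr = np.asarray(values, dtype=float)
    arr = arr - arr.min()  # shift so all values are >= 0
    arr = np.round(arr, decimals)
    if decimals == 0:
        tokens = [str(int(v)) for v in arr]
    else:
        tokens = [f"{v:.{decimals}f}" for v in arr]
    return ",".join(tokens)

print(series_to_text([0.5, 1.5, 3.5, 2.5]))  # "0,1,3,2"
```

A string like this can then be placed directly into a prompt, either asking the model to point out anomalous positions (the prompt-based paradigm) or to continue the sequence (the forecasting paradigm).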
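The forecasting-based paradigm described in the abstract turns anomaly detection into a residual-analysis problem: an LLM forecasts the series, and points where the observed values deviate strongly from the forecast are flagged. The following is a generic residual-thresholding sketch under that assumption; the threshold rule (mean plus `k` standard deviations) is a common convention, not necessarily SIGLLM's post-processing.

```python
import numpy as np

def detect_anomalies(actual, forecast, k=2.0):
    """Flag indices where the forecast error is unusually large.

    Computes absolute residuals between the observed series and a model's
    forecast, then marks points whose residual exceeds mean + k * std of
    the residuals. Generic sketch; the paper's exact error scoring and
    thresholding may differ.
    """
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    residuals = np.abs(actual - forecast)
    threshold = residuals.mean() + k * residuals.std()
    return np.where(residuals > threshold)[0]

actual = [1, 1, 1, 10, 1, 1]
forecast = [1, 1, 1, 1, 1, 1]
print(detect_anomalies(actual, forecast))  # flags index 3
```

In a full pipeline, `forecast` would come from the LLM's continuation of the text-encoded series, decoded back into numbers before the comparison.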