Uncertainty-aware Fine-tuning on Time Series Foundation Model for Anomaly Detection

27 Sept 2024 (modified: 13 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Time Series Foundation Model, Anomaly Detection, Fine-tuning
TL;DR: We propose ULoRA-MoE, an uncertainty-aware fine-tuning approach on time series foundation model for anomaly detection.
Abstract: Time-series anomaly detection is a crucial task in various real-world domains, geared towards identifying data observations that significantly deviate from the norm. Although time-series foundation models have shown promising results across multiple tasks, their effectiveness in anomaly detection often falls short. This is because their unsupervised learning paradigm is compromised by anomaly contamination in the training data. In addition, existing approaches lack the capability to capture the boundaries between multiple types of normal and abnormal patterns. To overcome these challenges, we propose ULoRA-MoE, a general uncertainty-aware fine-tuning approach using a resource-efficient Mixture-of-Experts (MoE) module based on LoRA. The proposed approach enhances fine-tuning performance across a broad spectrum of time series foundation models for anomaly detection. Each expert module of the MoE helps learn a different type of anomaly. Furthermore, we design the uncertainty-aware router of the MoE using the Gumbel-Softmax distribution for categorical sampling to capture epistemic uncertainty. Given the estimated uncertainty, we propose a calibrated anomaly score function to mitigate the detrimental effects of anomaly contamination. We conducted extensive experiments on two general types of time series foundation models. The results demonstrate that our approach significantly improves model performance compared to existing fine-tuning approaches. Furthermore, ULoRA-MoE shows competitive performance compared to a comprehensive set of non-learning, classical learning, and deep learning (DL) based time-series anomaly detection baselines across 8 real-world benchmarks.
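The submission does not include code, so the sketch below is only an illustration of the mechanism the abstract describes, not the authors' implementation: a minimal PyTorch LoRA-based MoE adapter whose router samples experts via Gumbel-Softmax, plus a hypothetical uncertainty-calibrated anomaly score. All class, function, and hyperparameter names (LoRAMoELayer, calibrated_anomaly_score, rank, num_experts, tau, lam) are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRAMoELayer(nn.Module):
    """LoRA-style Mixture-of-Experts adapter with a Gumbel-Softmax router.

    Each expert is a low-rank (A, B) update on top of a frozen base projection;
    the router draws a (relaxed) categorical sample per token via Gumbel-Softmax,
    so routing stays differentiable while repeated stochastic samples expose
    epistemic uncertainty. Illustrative sketch only.
    """

    def __init__(self, d_model: int, rank: int = 8, num_experts: int = 4, tau: float = 1.0):
        super().__init__()
        self.tau = tau
        # Frozen base projection (stands in for a foundation-model weight).
        self.base = nn.Linear(d_model, d_model)
        for p in self.base.parameters():
            p.requires_grad_(False)
        # One low-rank (A, B) adapter pair per expert.
        self.lora_A = nn.Parameter(torch.randn(num_experts, d_model, rank) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(num_experts, rank, d_model))
        # Router producing per-token expert logits.
        self.router = nn.Linear(d_model, num_experts)

    def forward(self, x: torch.Tensor, hard: bool = True) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        logits = self.router(x)                                    # (B, T, E)
        gates = F.gumbel_softmax(logits, tau=self.tau, hard=hard)  # stochastic, differentiable gates
        # Per-expert low-rank updates: x @ A_e @ B_e for every expert e.
        delta = torch.einsum("btd,edr,erk->btek", x, self.lora_A, self.lora_B)
        # Mix experts with the sampled gates and add to the frozen path.
        return self.base(x) + torch.einsum("btek,bte->btk", delta, gates)


def calibrated_anomaly_score(model, x, target, n_samples: int = 8, lam: float = 1.0):
    """Hypothetical uncertainty-calibrated score: average the reconstruction
    error over several router samples and discount points where the samples
    disagree (high epistemic uncertainty); one plausible reading of the
    calibration mentioned in the abstract, not the paper's exact formula."""
    errs = []
    with torch.no_grad():
        for _ in range(n_samples):
            recon = model(x)                                   # fresh Gumbel sample each pass
            errs.append(((recon - target) ** 2).mean(dim=-1))  # (B, T)
    errs = torch.stack(errs)                                   # (S, B, T)
    return errs.mean(dim=0) - lam * errs.std(dim=0)
```

Because each forward pass draws a new routing sample, the spread of reconstruction errors across passes serves as a simple epistemic-uncertainty estimate that the score above uses to down-weight contaminated regions.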
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9397