Hallucination Detection and Mitigation with Diffusion in Multi-Variate Time-Series Foundation Models

ICLR 2026 Conference Submission 20333 Authors

19 Sept 2025 (modified: 08 Oct 2025) · CC BY 4.0
Keywords: Multivariate Time-Series, Foundation Model, Hallucination, Diffusion Model
Abstract: Foundation models for natural language processing have coherent definitions of hallucination and established methods for its detection and mitigation. However, analogous definitions and methods do not exist for multi-variate time-series (MVTS) foundation models. We propose new definitions of MVTS hallucination, along with detection and mitigation methods that use a diffusion model to estimate hallucination levels. We derive relational datasets from popular time-series datasets to benchmark these relational hallucination levels. Using these definitions and models, we find that open-source pre-trained MVTS imputation foundation models hallucinate relationally, on average, up to 59.5% as much as a weak baseline. The proposed mitigation method reduces this by up to 47.7% for these models. These definitions and methods may improve the adoption and safe usage of MVTS foundation models.
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 20333