TimeFK: Towards Time Series Forecasting via Treating LLMs as Fuzzy Key

07 Sept 2025 (modified: 17 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Time series forecasting, large language models, multi-modal, Gaussian fuzzy mapping, fuzzy-aware attention decoder
Abstract: Time series forecasting (TSF) aims to predict future values from historical data. Recent advances in large language models (LLMs) that integrate cross-modal information (time series data and textual prompts) have demonstrated remarkable performance on TSF tasks. However, a significant gap remains between LLM-based methods and specialized deep learning approaches, owing to the inherent differences between the two paradigms. To bridge this gap, we propose TimeFK, a TSF framework that treats LLMs as "fuzzy keys" for activating forecasting capabilities. Specifically, we introduce a tri-branch multi-modal encoding scheme that combines numerical and linguistic representations: (1) a time series encoder generates precise but weak embeddings, (2) a statistical encoder captures robust yet entangled features, and (3) a background encoder learns dataset-level information that remains disentangled. Fusing these precise, robust, and disentangled representations improves prediction accuracy. To further mitigate noise introduced by language prompts, we propose a Gaussian fuzzy mapping mechanism that projects the LLM's hidden representations into a fuzzy set space, preserving semantic richness while suppressing irrelevant noise. Additionally, to avoid representation entanglement, a fuzzy-aware attention decoder uses the fused cross-modal representations as keys and the time series embeddings as values, enabling query-based interactions for forecasting. Extensive experiments on seven real-world benchmark datasets demonstrate that TimeFK outperforms state-of-the-art methods, highlighting the effectiveness of integrating fuzzy reasoning with multi-modal time series analysis.
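The abstract describes two mechanisms in enough detail to sketch: the Gaussian fuzzy mapping that projects LLM hidden states into a fuzzy set space, and the fuzzy-aware attention decoder that uses fused cross-modal features as keys and time series embeddings as values. The PyTorch sketch below is an illustrative reading only, not the authors' implementation; the class names, the number of fuzzy sets `num_sets`, the learned per-step forecast queries, and the exact Gaussian membership formula are assumptions filled in for concreteness.

```python
import torch
import torch.nn as nn


class GaussianFuzzyMapping(nn.Module):
    """Hypothetical reading of the abstract's Gaussian fuzzy mapping: each of
    `num_sets` fuzzy sets has a learnable center and width; a hidden vector's
    membership in set k is exp(-||h - c_k||^2 / (2 * sigma_k^2)). Memberships
    re-weight learnable prototype vectors, softening noisy prompt features."""

    def __init__(self, d_model: int, num_sets: int = 16):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_sets, d_model))
        self.log_sigma = nn.Parameter(torch.zeros(num_sets))
        self.prototypes = nn.Parameter(torch.randn(num_sets, d_model))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq, d_model) hidden states from the LLM
        dist2 = (h.unsqueeze(-2) - self.centers).pow(2).sum(-1)  # (B, S, K)
        sigma2 = self.log_sigma.exp().pow(2)                     # (K,)
        mu = torch.exp(-dist2 / (2.0 * sigma2))                  # Gaussian memberships
        mu = mu / (mu.sum(dim=-1, keepdim=True) + 1e-8)          # normalize over sets
        return mu @ self.prototypes                              # (B, S, d_model)


class FuzzyAwareAttentionDecoder(nn.Module):
    """Cross-attention in which fused cross-modal features serve as keys and
    time series embeddings as values, with learned per-step forecast queries."""

    def __init__(self, d_model: int, n_heads: int = 4, horizon: int = 96):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(horizon, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, 1)

    def forward(self, fused_key: torch.Tensor, ts_value: torch.Tensor) -> torch.Tensor:
        # fused_key, ts_value: (B, S, d_model); keys and values share length S
        q = self.queries.unsqueeze(0).expand(fused_key.size(0), -1, -1)
        out, _ = self.attn(q, fused_key, ts_value)  # query-based interaction
        return self.head(out).squeeze(-1)           # (B, horizon) point forecasts


# Toy usage: random tensors standing in for the encoder branches.
B, S, D = 2, 32, 64
fuzzy = GaussianFuzzyMapping(D)(torch.randn(B, S, D))  # de-noised prompt features
fused = fuzzy + torch.randn(B, S, D)                   # stand-in for branch fusion
pred = FuzzyAwareAttentionDecoder(D)(fused, torch.randn(B, S, D))
print(pred.shape)  # torch.Size([2, 96])
```

Note the design implied by the abstract: because the time series embeddings enter only as attention values, the decoder's output is always a convex combination of numerical features, while the (potentially noisy) language-derived keys influence only where attention lands, which is one plausible way the method avoids entangling the modalities.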
Primary Area: learning on time series and dynamical systems
Submission Number: 2765