Keywords: Natural Language to Time Series, Time Series Forecasting, Foundation Models, Cross-Modal Learning, Ensemble Models, LLaMA, Benchmarking, Zero-shot Forecasting, Time Series Foundation Models
TL;DR: A cross-modal framework that maps natural language into time series forecasts via LLaMA prompting and ensemble calibration, evaluated on the new NL2TS-675 dataset.
Abstract: We introduce Zero-to-Forecast, a cross-modal AI framework that converts natural language descriptions into numerical time series predictions. Our approach unifies large language model reasoning with domain- and pattern-specific predictors and post-hoc calibration (domain-aware smoothing, monotonic constraints), yielding robust, realistic sequences from free-form text. On the NL2TS-675 benchmark spanning six domains, our advanced domain-optimized ensemble achieves overall MAE 16.06 with all domains < 25 MAE (Finance 18.29, Healthcare 10.64, Weather 15.04, IoT 16.21, Technology 15.11, Retail 16.91), substantially improving over strong baselines. We release code, artifacts, and a live interactive demo, positioning natural language-driven forecasting as a practical paradigm for zero-data scenario planning.
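The post-hoc calibration step described above can be sketched as follows. This is an illustrative sketch only: the function name, window size, and the running-maximum rule for enforcing monotonicity are assumptions for demonstration, not the paper's actual implementation.

```python
import numpy as np

def calibrate(series, window=3, monotonic=False):
    """Post-hoc calibration sketch: smooth a raw forecast, then
    optionally enforce a non-decreasing trend (hypothetical rule)."""
    s = np.asarray(series, dtype=float)
    # Domain-aware smoothing stand-in: centered moving average
    # with edge padding so the output length matches the input.
    pad = window // 2
    padded = np.pad(s, pad, mode="edge")
    kernel = np.ones(window) / window
    smoothed = np.convolve(padded, kernel, mode="valid")
    if monotonic:
        # Monotonic constraint: a running maximum keeps the
        # sequence non-decreasing (e.g. for cumulative metrics).
        smoothed = np.maximum.accumulate(smoothed)
    return smoothed
```

For example, `calibrate([1, 5, 2, 8, 4], monotonic=True)` returns a length-5, non-decreasing sequence; the smoothing damps the raw forecast's spikes before the monotone projection is applied.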
Submission Number: 34