Sparse-Smooth Decomposition for Nonlinear Industrial Time Series Forecasting

18 Sept 2025 (modified: 12 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Industrial Time Series Forecasting, Sparse Learning, Temporal Regularization, Interpretable Machine Learning, Nonlinear System Identification
Abstract: Industrial time series forecasting faces unique challenges: hundreds of correlated sensors, complex nonlinear dynamics, and the critical need for interpretable models that engineers can trust. We introduce the nonlinear causal sparse-smooth network, a framework that decomposes high-dimensional industrial forecasting into interpretable sparse-smooth feature extraction followed by nonlinear prediction. Unlike black-box deep learning approaches that use all sensors indiscriminately, our method automatically identifies critical sensor subsets while learning smooth temporal filters that reflect physical process dynamics. We cast this as a structured optimization problem with sparsity penalties for sensor selection and smoothness regularization for temporal patterns, unified within an identifiable Wiener model architecture. Theoretically, we prove convergence guarantees, establish sensor selection consistency, and derive generalization bounds that explicitly account for the interplay between sparsity, smoothness, and nonlinearity. On a challenging industrial refinery benchmark, our structured approach achieves a 25.2% lower error rate than state-of-the-art Transformer models, while simultaneously identifying a sparse subset of critical sensors and their interpretable dynamic modes. Our work demonstrates that incorporating strong, domain-aware inductive biases into a structured architecture offers a powerful alternative to monolithic black-box models for real-world industrial forecasting.
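As a rough illustration of the structured objective sketched in the abstract (the paper's exact formulation, penalties, and notation may differ), a sparse-smooth Wiener model of this kind can be written with sensor readings $x_{j,t}$, a per-sensor temporal filter $w_j = (w_{j,0},\dots,w_{j,L})$, a static nonlinear readout $g$, and hypothetical regularization weights $\lambda_1, \lambda_2$:

\[
\min_{\{w_j\},\, g}\ \sum_{t}\Big(y_t - g\Big(\sum_{j=1}^{p}\sum_{\tau=0}^{L} w_{j,\tau}\, x_{j,\,t-\tau}\Big)\Big)^2
\;+\; \lambda_1 \sum_{j=1}^{p} \lVert w_j \rVert_2
\;+\; \lambda_2 \sum_{j=1}^{p} \sum_{\tau=0}^{L-1} \big(w_{j,\tau+1} - w_{j,\tau}\big)^2
\]

In this reading, the group penalty $\lVert w_j \rVert_2$ can zero out a sensor's entire filter (sensor selection), the squared-difference term encourages smooth filter shapes resembling physical impulse responses, and the nonlinearity $g$ supplies the Wiener model's static output stage.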
Primary Area: causal reasoning
Submission Number: 12294