Keywords: Time series, Controllable Representation Learning
TL;DR: Controllable Representation Learning for Time-series Analysis
Abstract: Representation learning for time series typically relies on reliable anchors: smooth input signals or dense supervision that constrain latent dynamics. When both are degraded by noise, missing values, or irregular sampling, hidden states drift and standard methods collapse. To tackle this problem, we propose a conceptual shift: treating representation learning itself as a control problem. Our framework, Neural Feedback Control (NFC), actively regulates latent trajectories using confidence-weighted pseudo-observations and pseudo-labels, combining pseudo-data-based controllers with continuous-time dynamics and residual-based feedback. This design turns latent-space evolution from passive inference into a controllable process. In contrast to Neural ODEs/CDEs, which model latent dynamics without stability guarantees, and to predictive coding approaches, which propagate errors without explicit contraction control, NFC provides a feedback-driven mechanism with provable stability under partial observability. Theoretically, we prove that under mild conditions NFC guarantees exponential decay of the output error to a bounded region, yielding a certified stability guarantee. Every module in NFC (pseudo-signal generation, confidence weighting, and feedback penalties) plays a role in a single closed-loop control system,
transforming representation learning into active regulation rather than passive inference. Empirically, NFC achieves substantial robustness gains: over 50\% lower forecasting error on power load datasets and more than 10\% higher accuracy on a human activity dataset with 30\% missing data. These results highlight task-aware latent control as an effective approach to stabilizing representation learning when conventional anchors fail.
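To make the closed-loop view concrete, the sketch below gives one plausible reading of the abstract: an Euler step of a latent ODE whose free drift is corrected by a confidence-weighted residual between a pseudo-observation and the decoded latent state. All names, dimensions, and the linear feedback gain are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's code): one Euler step of a
# feedback-controlled latent ODE with a confidence-weighted residual correction.
import torch
import torch.nn as nn

class FeedbackLatentStep(nn.Module):
    def __init__(self, latent_dim: int, obs_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Free latent dynamics f(z), as in a Neural ODE.
        self.dynamics = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, latent_dim),
        )
        # Decoder mapping the latent state to observation space.
        self.decoder = nn.Linear(latent_dim, obs_dim)
        # Feedback gain mapping the output residual back into latent space.
        self.gain = nn.Linear(obs_dim, latent_dim, bias=False)

    def forward(self, z, pseudo_obs, confidence, dt: float = 0.05):
        # Residual between the pseudo-observation and the decoded latent state.
        residual = pseudo_obs - self.decoder(z)
        # Confidence-weighted feedback: low-confidence pseudo-observations
        # contribute little correction, so the free dynamics dominate.
        correction = self.gain(confidence * residual)
        # Euler step of dz/dt = f(z) + u(t), with u the feedback control.
        return z + dt * (self.dynamics(z) + correction)

# Usage: roll a batch of latent states forward with partially observed inputs.
step = FeedbackLatentStep(latent_dim=16, obs_dim=4)
z = torch.zeros(8, 16)          # batch of latent states
pseudo_obs = torch.randn(8, 4)  # imputed / pseudo observations
confidence = torch.rand(8, 1)   # per-sample confidence in [0, 1]
z_next = step(z, pseudo_obs, confidence)
print(z_next.shape)             # torch.Size([8, 16])
```

In this reading, the confidence weight acts as the knob that trades off trusting the free latent dynamics against correcting toward possibly noisy pseudo-signals, which is the closed-loop behavior the abstract attributes to NFC.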
Primary Area: learning on time series and dynamical systems
Submission Number: 23708