Abstract: We introduce Temporal Variational Implicit Neural Representations (TV-INRs), a probabilistic framework for modeling irregular multivariate time series that enables efficient and accurate individualized imputation and forecasting. By integrating implicit neural representations with latent variable models, TV-INRs learn distributions over time-continuous generator functions conditioned on signal-specific covariates.
Unlike existing approaches that require extensive training, fine-tuning, or meta-learning, our method achieves accurate individualized predictions through a single forward pass. Our experiments demonstrate that a single TV-INRs instance can accurately solve diverse imputation and forecasting tasks, offering a computationally efficient and scalable solution for real-world applications.
TV-INRs perform particularly well in low-data regimes, achieving substantially lower imputation error on several datasets, including order-of-magnitude improvements.
Submission Type: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: In response to the reviewer’s helpful feedback, we’ve made the following changes to our manuscript:
1. Corrected the SAITS ranking error on the Traffic dataset at L=2K
2. Statistical Reporting:
- Updated all tables so that the best results are in bold, and the second-best are underlined
- Statistical significance tests (Welch’s t-tests) have been moved to Appendix B.4
3. Long-Horizon Forecasting results (High MSE at F=720)
- Discussed the causes and significance of the poor MSE results in Section 4.1.2 and in the limitations section (more details below)
- Added further details in the Appendix, including a new, detailed error distribution analysis (Appendix B.2) and representative visualizations of the forecasting errors (Figure 4)
4. New Failure Modes & Limitations Section (4.1.5)
Added a new section on model limitations, where we discuss lack of advantage in high-data regimes, long-horizon instability, and simple missingness modeling.
5. Complexity Analysis (4.1.3)
Expanded Section 4.1.3 to include the complexity analysis results from Appendix A.9, A.10, and A.11. We’ve also expanded the analysis in Appendix A.9 to include a time- and memory-complexity comparison to the baselines.
6. Uncertainty-focused metrics
Reported new uncertainty-focused metrics (Negative Log-Likelihood, empirical 90% coverage evaluation) for the Electricity and HAR datasets in Appendix B.1 (Tables 26-27).
7. Clarifications on Experimental Protocol
Added details clarifying our methods and the datasets used:
- Added a concise step-by-step pipeline summary for clarity (Section 3.1)
- Clarified spatial encoding and masked attention usage (Section 3.2)
- Clarified that DeepTime and TimeFlow horizon lengths are used for forecasting (e.g., H=512)
- Clarified that Electricity/Traffic/Solar forecasting is per-series univariate
- Noted that the Solar-10 L=10000 setting is aligned with TimeFlow protocols
- Added a Laplace vs. Gaussian likelihood ablation (Appendix B.3)
8. Baseline Discussion Expansion
We have added discussion of LSCD and TabPFN-TS as related baselines:
- Discussed foundation-scale models (e.g., TabPFN-TS).
- Clarified scope limitations and public code availability.
- Extended Section 2.2 and Conclusion to contextualize TV-INRs.
9. Multivariate Forecasting
We have added multivariate forecasting results in Appendix A.14.
Assigned Action Editor: ~Michael_Minyi_Zhang1
Submission Number: 6821