Keywords: linear dynamical systems, state estimation, system identification, expectation–maximization algorithm
Abstract: Owing to their tractability for analysis and control, linear dynamical systems (LDSs) are a fundamental mathematical tool for modeling time-series data in many disciplines. In particular, many LDSs have sparse system matrices, because interactions among variables are limited or only a few significant relationships exist. However, existing learning algorithms for LDSs cannot learn system matrices under sparsity constraints. To address this issue, we impose sparsity-promoting priors on the system matrices and derive an expectation–maximization (EM) algorithm that gives a maximum a posteriori (MAP) estimate of both hidden states and system matrices from noisy observations. In addition, we find that many learning algorithms based on gradient descent use an inappropriate derivative rule because they neglect the inherent symmetry of noise covariance matrices. Here, we apply the derivative rule for structured matrices during optimization to guarantee that these matrices remain symmetric. Experimental results on simulated and real-world problems show that the proposed algorithm significantly improves learning accuracy over classical algorithms.
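The symmetry issue mentioned in the abstract can be illustrated with a minimal sketch (not the paper's code): for a function of a symmetric matrix, the derivative with respect to the free parameters is not the naive elementwise gradient G but the standard structured-matrix correction G + Gᵀ − diag(G). The snippet below verifies this rule numerically for the simple objective f(S) = tr(AS), with all names chosen for illustration.

```python
import numpy as np

# f(S) = trace(A @ S) for a symmetric matrix S.
# Treating every entry of S as independent gives the naive gradient G = A.T.
# Respecting the constraint S_ij = S_ji, the derivative w.r.t. the free
# parameters (i <= j) follows the structured-matrix rule G + G.T - diag(G).
rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))          # arbitrary, non-symmetric coefficient
G = A.T                                  # naive elementwise gradient of tr(A S)
G_sym = G + G.T - np.diag(np.diag(G))    # symmetry-corrected derivative

# Numerical check: perturb S along symmetric directions only.
S = rng.standard_normal((n, n)); S = (S + S.T) / 2
f = lambda M: np.trace(A @ M)
eps = 1e-6
for i in range(n):
    for j in range(i, n):
        E = np.zeros((n, n)); E[i, j] = E[j, i] = 1.0  # symmetric perturbation
        num = (f(S + eps * E) - f(S - eps * E)) / (2 * eps)
        assert abs(num - G_sym[i, j]) < 1e-6
print("symmetry-corrected gradient matches numerical derivative")
```

The same correction applies to the noise covariance matrices in the EM updates: optimizing them with the naive gradient G can drive them off the symmetric manifold, whereas the corrected rule keeps them symmetric by construction.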
Supplementary Material: zip
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 10892