Keywords: representation learning, physics identification, orthogonality
Abstract: Accurately identifying the underlying physical laws in complex systems is vital for effective control and interpretation. However, many systems are governed by a combination of known physical principles and unobservable or poorly understood components. Traditional model-based methods such as Kalman filters and state-space models often rely on oversimplified assumptions, while modern data-driven approaches, such as physics-informed neural networks (PINNs), can suffer from overfitting or lack theoretical guarantees for recovering the true physical dynamics. We propose the Orthogonal Deep Neural Network (ODNN) architecture to address these limitations. ODNN disentangles the known physical components from the unknown ones by imposing orthogonality constraints on the deep neural network. Unlike additive regularization methods, ODNN encodes the physical constraints directly into the network structure, ensuring that the DNN focuses on capturing the unknown or complex dynamics without overfitting. This approach leverages both explicit orthogonality (e.g., zero inner product) and implicit orthogonality (e.g., contrasting convexity, periodicity, or symmetry) between physical laws and unknown components. Theoretically, we prove that ODNN provides strong guarantees for accurate system identification under mild orthogonality assumptions, building on the universal approximation theorem. Empirically, ODNN is evaluated across eight synthetic and real-world datasets, showcasing its ability to recover governing physical equations with high accuracy and interpretability. Our results demonstrate that ODNN offers significant advantages in generalizability and robustness, making it a valuable framework for physics-based model identification in complex systems.
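To make the explicit-orthogonality idea concrete, below is a minimal, hypothetical sketch of one way a zero-inner-product constraint between a known physics term and a learned residual could be built into a network's forward pass (via a batch-wise projection), rather than added as a penalty. The physics term, layer sizes, and projection step are illustrative assumptions, not the paper's exact ODNN architecture.

```python
# Illustrative sketch only: a structural zero-inner-product constraint
# between a known physics component and a learned residual, realized as a
# batch-wise orthogonal projection inside forward(). All specifics here
# (physics term, layer sizes) are assumptions for illustration.
import torch
import torch.nn as nn


def known_physics(x: torch.Tensor) -> torch.Tensor:
    # Placeholder for the known physical law, e.g. a linear damping term.
    return -0.5 * x


class OrthogonalResidualNet(nn.Module):
    """DNN residual constrained to have zero empirical inner product
    (over the batch) with the known physics component."""

    def __init__(self, dim: int = 1, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        phys = known_physics(x)                     # known component
        resid = self.net(x)                         # learned unknown component
        # Project the residual onto the orthogonal complement of the
        # physics component, enforcing <phys, resid> = 0 by construction.
        coeff = (resid * phys).sum() / (phys * phys).sum().clamp_min(1e-8)
        resid_perp = resid - coeff * phys
        return phys + resid_perp                    # full system output


if __name__ == "__main__":
    model = OrthogonalResidualNet()
    x = torch.randn(128, 1)
    y = model(x)
    resid_perp = y - known_physics(x)
    # The empirical inner product is (numerically) zero by construction.
    print((resid_perp * known_physics(x)).sum().item())
```

Because the constraint lives in the forward pass, the learned component cannot re-absorb the known physics during training; an additive penalty would only discourage, not prevent, such overlap.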
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4095