PDED: Revitalize physics laws submerged in data information for Traffic State Estimation

21 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Supplementary Material: zip
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Physics-informed deep learning, Traffic state estimation, Knowledge distillation, Ensemble learning
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Traditional physics-informed deep learning combines data-driven and model-based methods by incorporating a physics loss as a constraint in the total loss function, aiming to force the neural network to respect physical properties. However, this simple integration can leave physical knowledge submerged in data information: the data loss and physics loss may differ greatly in magnitude, have conflicting gradient directions, and converge at different rates, so the physics law may not work as expected and can even inhibit the model from working effectively, especially for traffic state estimation (TSE). To alleviate these issues, we propose a Physical knowledge combined Data information neural network with Ensemble Distillation framework (PDED), which first disentangles the data-driven model from the physics-based model and then reassembles them to exploit both label information and physical properties. Concretely, we separately train a data-driven model on true labels and a physics-based model according to physics laws. We then introduce ensemble learning and knowledge distillation to assemble the representations of these two models into a more competitive, learnable online teacher model, which in turn distills knowledge back to guide their updates, so the student models learn richer knowledge and achieve better performance. Through extensive experiments on both synthetic and real-world datasets, our model demonstrates better performance than existing state-of-the-art methods.
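A minimal sketch of the conventional combined-loss setup the abstract critiques, illustrating how the data term and physics term can differ by orders of magnitude. The LWR conservation law with a Greenshields flux is assumed here as a standard TSE physics model; the paper's exact PDE, grid, and loss weighting may differ.

```python
import numpy as np

def data_loss(rho_pred, rho_obs):
    # mean squared error against observed traffic density
    return np.mean((rho_pred - rho_obs) ** 2)

def physics_loss(rho, q, dt, dx):
    # finite-difference residual of the LWR conservation law:
    #   d(rho)/dt + d(q)/dx = 0
    drho_dt = (rho[1:, :-1] - rho[:-1, :-1]) / dt
    dq_dx = (q[:-1, 1:] - q[:-1, :-1]) / dx
    return np.mean((drho_dt + dq_dx) ** 2)

rng = np.random.default_rng(0)
rho = rng.uniform(0.0, 1.0, size=(50, 50))    # toy density field
q = rho * 30.0 * (1.0 - rho)                  # Greenshields flux (assumed)
rho_obs = rho + rng.normal(0.0, 0.01, rho.shape)  # lightly noised labels

ld = data_loss(rho, rho_obs)
lp = physics_loss(rho, q, dt=1.0, dx=10.0)

# With a weight of 1 the two terms differ by orders of magnitude, so one
# term dominates the total gradient -- the imbalance the abstract describes.
total = ld + 1.0 * lp
print(ld, lp, total)
```

In this toy setting the physics residual dwarfs the data loss, so gradient updates are driven almost entirely by the physics term; tuning a single scalar weight does not resolve conflicting gradient directions or differing convergence rates, which motivates PDED's separate training of the two models.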
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3151