Robust Hybrid Learning With Expert Augmentation

Published: 22 Feb 2023, Last Modified: 28 Feb 2023. Accepted by TMLR.
Abstract: Hybrid modelling reduces the misspecification of expert models by combining them with machine learning (ML) components learned from data. As with many ML algorithms, hybrid model performance guarantees are limited to the training distribution. Leveraging the insight that the expert model is usually valid even outside the training domain, we overcome this limitation by introducing a hybrid data augmentation strategy termed \textit{expert augmentation}. Based on a probabilistic formalization of hybrid modelling, we demonstrate that expert augmentation, which can be incorporated into existing hybrid systems, improves generalization. We empirically validate expert augmentation on three controlled experiments modelling dynamical systems with ordinary and partial differential equations. Finally, we assess the potential real-world applicability of expert augmentation on a dataset from a real double pendulum.
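The core idea of the abstract can be sketched in a few lines: because the expert model remains valid outside the training domain, one can simulate it under shifted parameters and append those trajectories to the training set before refitting the hybrid model's ML component. The toy system below (exponential decay with rate `k`) and all parameter ranges are illustrative assumptions, not the systems or procedure used in the paper.

```python
import numpy as np

# Hypothetical expert model: exact solution of the decay ODE dx/dt = -k*x.
# The paper studies richer ODE/PDE systems; this stands in for illustration.
def expert_model(x0, k, t):
    return x0 * np.exp(-k * t)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)

# Original training data: decay rates drawn from a narrow "training domain".
train_k = rng.uniform(0.5, 1.0, size=100)
train_X = np.stack([expert_model(1.0, k, t) for k in train_k])

# Expert augmentation: the expert model stays valid outside the training
# domain, so simulate it for shifted parameters and append the trajectories.
aug_k = rng.uniform(1.0, 3.0, size=100)  # deliberately outside [0.5, 1.0]
aug_X = np.stack([expert_model(1.0, k, t) for k in aug_k])

# Augmented dataset used to (re)train the hybrid model's ML component.
augmented_X = np.concatenate([train_X, aug_X])
print(augmented_X.shape)  # (200, 20)
```

In the paper's probabilistic formulation, the shifted parameters would be sampled from a distribution over the expert model's latent variables rather than a hand-picked interval as above.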
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We have addressed all clarity issues raised by the reviewers and corrected small typos. We have added an experiment showing that expert augmentation robustly improves performance on shifted test distributions, added results on the double pendulum with an HVAE, and added more nuance to our discussion of the applicability of expert augmentation.
Assigned Action Editor: ~Jasper_Snoek1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 470