Instance-dependent Approximation Guarantees for Lipschitz Approximators, Application to Scientific Machine Learning

TMLR Paper 4160 Authors

07 Feb 2025 (modified: 28 Apr 2025) · Decision pending for TMLR · CC BY 4.0
Abstract: Despite widespread adoption, Machine Learning models remain data-driven and lack exploitable theoretical guarantees on their approximation error. This limitation hinders their use for critical applications. In this paper, we show how to leverage the Lipschitz property of Lipschitz approximators, i.e., ML models that are Lipschitz continuous, to establish strict post-training, instance-dependent generalization error bounds given a set of validation points. We focus on the test-case domain of ML for scientific computing, called Scientific Machine Learning (SciML), where ML models are increasingly used but lack the theoretical approximation guarantees of classical scientific-computing simulation schemes. We first show how to derive error bounds using Voronoï diagrams for a Lipschitz approximator trained to learn a $K$-Lipschitz function, taking advantage of the mesh-like structure of the learning points. Second, we cast upper bounding as an optimization problem and use certified Deterministic Optimistic Optimization (introduced in Bachoc et al. 2021) and certified Voronoï Optimistic Optimization (which we design based on the non-certified version in Kim et al. 2020) to achieve tighter error bounds. The code is made available at [https://anonymous.4open.science/r/lipschitz_bounds_doo-7FDF](https://anonymous.4open.science/r/lipschitz_bounds_doo-7FDF).
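To give a rough sense of the kind of instance-dependent bound the abstract refers to, here is the standard Lipschitz argument (an illustrative sketch, not necessarily the paper's exact certified procedure): if the target $f$ is $K$-Lipschitz, the approximator $\hat{f}$ is $\hat{K}$-Lipschitz, and the validation errors $e_i = |f(x_i) - \hat{f}(x_i)|$ are known at points $x_i$, then the triangle inequality gives $|f(x) - \hat{f}(x)| \le \min_i \big[ e_i + (K + \hat{K})\,\|x - x_i\| \big]$. The snippet below evaluates this bound with NumPy; the function name, the Euclidean norm, and the toy data are assumptions made for illustration.

```python
import numpy as np

def lipschitz_error_bound(x, val_points, val_errors, K, K_hat):
    """Illustrative instance-dependent error bound via the standard Lipschitz
    argument (not the paper's certified DOO/VOO procedure).

    x          : query point, shape (d,)
    val_points : validation inputs x_i, shape (n, d)
    val_errors : observed errors |f(x_i) - f_hat(x_i)|, shape (n,)
    K, K_hat   : Lipschitz constants of the target f and the approximator f_hat
    """
    # Distance from the query point to every validation point (Euclidean norm assumed).
    dists = np.linalg.norm(val_points - x, axis=1)
    # Triangle inequality: |f(x) - f_hat(x)| <= e_i + (K + K_hat) * ||x - x_i|| for every i,
    # so the tightest such bound is the minimum over the validation points.
    return np.min(val_errors + (K + K_hat) * dists)

# Toy usage: bound the error at one query point for a 1-Lipschitz target and approximator.
rng = np.random.default_rng(0)
val_points = rng.uniform(size=(50, 2))
val_errors = rng.uniform(0.0, 0.05, size=50)  # made-up validation errors
print(lipschitz_error_bound(np.array([0.5, 0.5]), val_points, val_errors, K=1.0, K_hat=1.0))
```

Restricting the minimum to the nearest validation point partitions the input space into Voronoï cells, which is where the Voronoï-diagram view mentioned in the abstract naturally enters.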
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: Andres R Masegosa
Submission Number: 4160
