Gradient-enhanced deep Gaussian processes for multifidelity modeling

Published: 24 Feb 2024. Last Modified: 02 Oct 2024. CoRR abs/2402.16059. License: CC BY 4.0.
Abstract: Multifidelity models integrate data from multiple sources to produce a single approximator for the underlying process. Dense low-fidelity samples are used to reduce interpolation error, while sparse high-fidelity samples are used to compensate for bias or noise in the low-fidelity samples. Deep Gaussian processes (GPs) are attractive for multifidelity modeling as they are non-parametric, robust to overfitting, perform well for small datasets, and, critically, can capture nonlinear and input-dependent relationships between data of different fidelities. Many datasets naturally contain gradient data, especially when they are generated by computational models that are compatible with automatic differentiation or have adjoint solutions. Principally, this work extends deep GPs to incorporate gradient data. We demonstrate this method on an analytical test problem and a realistic partial differential equation problem, where we predict the aerodynamic coefficients of a hypersonic flight vehicle over a range of flight conditions and geometries. In both examples, the gradient-enhanced deep GP outperforms a gradient-enhanced linear GP model and their non-gradient-enhanced counterparts.
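To make the gradient-enhancement idea concrete, here is a minimal single-fidelity sketch in NumPy (not the paper's deep GP): a GP with an RBF kernel conditions jointly on function values and their derivatives by using the kernel's analytic derivatives as cross-covariances. All function names and the 1-D setting are illustrative assumptions, not from the paper.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel k(a,b) = exp(-(a-b)^2 / (2 ell^2)).

    Returns the kernel matrix and the pairwise differences a_i - b_j,
    which the derivative blocks below reuse.
    """
    d = a[:, None] - b[None, :]
    return np.exp(-d**2 / (2 * ell**2)), d

def gradient_gp_predict(X, y, dy, Xs, ell=1.0, jitter=1e-10):
    """Posterior mean of a gradient-enhanced GP at test points Xs.

    Conditions on observed values y = f(X) and derivatives dy = f'(X)
    by assembling the joint covariance over (f, f') from the RBF
    kernel's analytic derivatives.
    """
    Kff, d = rbf(X, X, ell)
    # cov(f(x_i), f'(x_j)) = dk/dx_j = (x_i - x_j)/ell^2 * k
    Kfg = d / ell**2 * Kff
    # cov(f'(x_i), f'(x_j)) = d^2 k / dx_i dx_j
    Kgg = (1.0 / ell**2 - d**2 / ell**4) * Kff
    K = np.block([[Kff, Kfg], [Kfg.T, Kgg]]) + jitter * np.eye(2 * len(X))
    Ksf, ds = rbf(Xs, X, ell)
    Ksg = ds / ell**2 * Ksf  # cov(f(x*), f'(x_j))
    Ks = np.hstack([Ksf, Ksg])
    return Ks @ np.linalg.solve(K, np.concatenate([y, dy]))
```

Conditioning on derivatives in this way is what lets a fixed budget of samples pin down both the level and the slope of the surrogate; the paper applies the same principle inside each layer of a deep GP rather than to a single linear GP.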