Simulators are vital in science and engineering, as they faithfully model the influence of design parameters on real-world observations. A common task is to use a simulator to optimize the design parameters so as to minimize a desired objective function. Since simulators are often non-differentiable black boxes, and since each simulation incurs significant compute time, gradient-based optimization can be intractable or, in some cases, impossible. Furthermore, in many experiment-design settings, practitioners must solve sets of closely related optimization problems, so restarting the optimization from scratch each time is wasteful when the forward simulation model is expensive to evaluate. To address these challenges, this paper introduces a novel method for solving classes of similar black-box optimization problems: we learn an active learning policy that guides the training of a differentiable surrogate, and then use the surrogate's gradients to optimize the simulation parameters with gradient descent. Once the policy is trained, the cost of downstream optimization through black-box simulators is amortized, and we require up to $\sim$90% fewer expensive simulator calls than baselines such as local surrogate-based approaches, numerical optimization, and Bayesian methods.
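The core loop described above — query the simulator at points chosen by a policy, fit a differentiable surrogate to those samples, and descend the surrogate's gradient — can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the `simulator` function is a toy stand-in for a real expensive black box, the quadratic least-squares surrogate replaces the paper's learned surrogate, and the Gaussian sampling around the current iterate is a crude stand-in for the learned active learning policy.

```python
import numpy as np

def simulator(x):
    """Hypothetical expensive black-box objective (stand-in for a real
    non-differentiable simulator); minimum near x = (1, ..., 1)."""
    return float(np.sum((x - 1.0) ** 2) + 0.1 * np.sum(np.sin(5 * x)))

def fit_quadratic_surrogate(X, y):
    """Least-squares fit of a differentiable surrogate
    s(x) = c + g.x + x^T diag(h) x, returning its analytic gradient."""
    # Features: [1, x_1..x_d, x_1^2..x_d^2] (diagonal curvature only).
    Phi = np.hstack([np.ones((len(X), 1)), X, X ** 2])
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    d = X.shape[1]
    g, h = w[1:1 + d], w[1 + d:]
    return lambda x: g + 2 * h * x  # gradient of the surrogate, not the simulator

def optimize(x0, n_rounds=15, batch=8, lr=0.1, seed=0):
    """Alternate between querying the simulator and descending the surrogate."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    X, y = [], []
    for _ in range(n_rounds):
        # "Policy": sample simulator queries around the current iterate
        # (a learned policy would choose these points adaptively).
        pts = x + 0.5 * rng.standard_normal((batch, x.size))
        X.extend(pts)
        y.extend(simulator(p) for p in pts)
        grad = fit_quadratic_surrogate(np.array(X), np.array(y))
        # Many cheap gradient steps on the surrogate per expensive batch,
        # clipped to a trust box in case the early fit is poor.
        for _ in range(20):
            x = np.clip(x - lr * grad(x), -5.0, 5.0)
    return x, y  # final iterate and all observed simulator values
```

The point of the sketch is the cost structure: simulator calls grow only with `n_rounds * batch`, while the inner gradient-descent steps are free, which is where the claimed amortization over related problems would come from once the sampling policy itself is learned rather than fixed.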