When can Regression-Adjusted Control Variate Help? Rare Events, Sobolev Embedding and Minimax Optimality

Published: 21 Sept 2023, Last Modified: 20 Dec 2023 — NeurIPS 2023 poster
Keywords: Information-theoretic Lower Bounds, Sobolev Embedding Theorem, Quadrature Rule
TL;DR: ML-based control variates improve the Monte Carlo rate under sufficient smoothness. Without sufficient smoothness, truncated Monte Carlo is minimax optimal, and ML-based control variates cannot boost the rate because of rare and extreme events.
Abstract: This paper studies the use of a machine learning-based estimator as a control variate for mitigating the variance of Monte Carlo sampling. Specifically, we seek to uncover the key factors that influence the efficiency of control variates in reducing variance. We examine a prototype estimation problem: simulating the moments of a Sobolev function from observations obtained at (random) quadrature nodes. First, we establish an information-theoretic lower bound for the problem. We then study a specific quadrature rule that employs a nonparametric regression-adjusted control variate to reduce the variance of the Monte Carlo simulation. We demonstrate that such a quadrature rule can improve the Monte Carlo rate and achieve the minimax optimal rate under a sufficient smoothness assumption. By the Sobolev Embedding Theorem, the sufficient smoothness assumption rules out rare and extreme events. Finally, we show that, in the presence of rare and extreme events, a truncated version of the Monte Carlo algorithm achieves the minimax optimal rate, while the control variate cannot improve the convergence rate.
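The abstract's core idea — fit a cheap regression surrogate on part of the quadrature nodes, subtract it as a control variate, and add back its exactly known integral — can be sketched in a few lines. This is a minimal illustration, not the paper's construction: the integrand `f`, the polynomial surrogate, and the sample sizes are all hypothetical choices made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.exp(x)  # hypothetical smooth integrand on [0, 1]; true integral is e - 1

# Stage 1: fit a cheap regression surrogate g to f on half of the quadrature nodes.
x_fit = rng.uniform(0, 1, 500)
g = np.poly1d(np.polyfit(x_fit, f(x_fit), deg=3))
G = np.polyint(g)
g_mean = G(1) - G(0)  # the surrogate's integral is known in closed form

# Stage 2: Monte Carlo on the residual f - g, plus the known mean of g.
x_mc = rng.uniform(0, 1, 500)
plain = f(x_mc).mean()                           # plain Monte Carlo estimate
adjusted = g_mean + (f(x_mc) - g(x_mc)).mean()   # regression-adjusted estimate
```

Because the residual `f - g` has much smaller variance than `f` itself when `f` is smooth enough for `g` to approximate it well, the adjusted estimator converges faster; the paper's point is that this gain hinges on smoothness, and disappears when rare, extreme values dominate the residual.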
Supplementary Material: pdf
Submission Number: 514