Online Statistical Inference of Constrained Stochastic Optimization via Random Scaling

Published: 28 Nov 2025, Last Modified: 30 Nov 2025 · NeurIPS 2025 Workshop MLxOR · CC BY 4.0
Keywords: Stochastic Optimization; online statistical inference; constrained optimization; stochastic sequential quadratic programming
TL;DR: We propose an efficient online statistical inference procedure for constrained stochastic optimization problems.
Abstract: Constrained stochastic nonlinear optimization problems have attracted significant attention for their ability to model complex machine learning phenomena. As datasets continue to grow, online inference methods have become crucial for enabling real-time decision-making without the need to store historical data. In this work, we develop an online inference procedure for constrained stochastic optimization by leveraging a method called Adaptive Inexact Stochastic Sequential Quadratic Programming (AI-SSQP), which can be viewed as a generalization of (sketched) Newton methods to constrained problems. We first establish the asymptotic normality of averaged AI-SSQP iterates. We then propose a random scaling method that constructs parameter-free pivotal statistics through appropriate normalization. Our online inference approach offers two key advantages: (i) it enables the construction of asymptotically valid and statistically efficient confidence intervals, whereas existing work based on the last iterate is less efficient and relies on an inconsistent covariance estimator; and (ii) it is matrix-free, i.e., the computation involves only primal-dual iterates without any matrix inversions, so its computational cost matches that of first-order methods for unconstrained problems.
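
For intuition, the random-scaling construction can be sketched independently of the AI-SSQP details. Given the stream of averaged iterates x̄_t, one studentizes √t(x̄_t − x*) by the random-scaling matrix V̂_t = t⁻² Σ_{s≤t} s²(x̄_s − x̄_t)(x̄_s − x̄_t)ᵀ, which yields a parameter-free pivotal statistic without estimating the asymptotic covariance. The sketch below is our illustration, not the paper's code: the class name `RandomScalingCI` is hypothetical, and the 97.5% critical value 6.747 of the pivotal law is the commonly tabulated value from the random-scaling inference literature. It maintains V̂_t through three running sums, so no iterate history is stored, consistent with the matrix-free, inversion-free design described in the abstract.

```python
import numpy as np


class RandomScalingCI:
    """Minimal sketch of online random-scaling inference (hypothetical helper).

    Tracks the running average xbar_t of the iterates and the random-scaling
    matrix V_t = t^{-2} * sum_{s<=t} s^2 (xbar_s - xbar_t)(xbar_s - xbar_t)^T
    via running sums, so no history of iterates needs to be stored.
    """

    # ~97.5% quantile of the pivotal law W(1)/sqrt(int_0^1 (W(r)-rW(1))^2 dr),
    # as tabulated in the random-scaling literature (assumed value).
    Q975 = 6.747

    def __init__(self, dim):
        self.t = 0
        self.xbar = np.zeros(dim)      # running average of primal-dual iterates
        self.A = np.zeros((dim, dim))  # sum_{s<=t} s^2 * xbar_s xbar_s^T
        self.b = np.zeros(dim)         # sum_{s<=t} s^2 * xbar_s
        self.c = 0.0                   # sum_{s<=t} s^2

    def update(self, x):
        """Ingest the next iterate x_t (e.g., the output of one solver step)."""
        self.t += 1
        self.xbar += (np.asarray(x, dtype=float) - self.xbar) / self.t
        w = float(self.t) ** 2
        self.A += w * np.outer(self.xbar, self.xbar)
        self.b += w * self.xbar
        self.c += w

    def confidence_intervals(self):
        """Coordinate-wise asymptotic 95% intervals centered at xbar_t."""
        t, xb = self.t, self.xbar
        # Expand V_t = t^{-2} sum s^2 (xbar_s - xbar_t)(xbar_s - xbar_t)^T
        # in terms of the running sums A, b, c.
        V = (self.A - np.outer(self.b, xb) - np.outer(xb, self.b)
             + self.c * np.outer(xb, xb)) / t**2
        # Clamp tiny negative diagonal entries caused by floating-point error.
        half = self.Q975 * np.sqrt(np.maximum(np.diag(V), 0.0) / t)
        return xb - half, xb + half
```

Usage under these assumptions: instantiate `RandomScalingCI(dim)`, call `update(x_t)` once per optimizer step, and read off `confidence_intervals()` at any time; each update costs O(dim²), matching the cost profile of first-order methods up to the outer products.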
Submission Number: 7