Keywords: Stochastic Optimization, Statistics, Non-Convex Optimization, Dynamical Systems, Probability Theory
TL;DR: Asymptotic statistical results for estimators built from stochastic approximation algorithms, for a class of non-convex objectives with hidden structure under weakly dependent data.
Abstract: Stochastic gradient methods are increasingly employed in statistical inference tasks, such as parameter and interval estimation. Yet, the existing theoretical framework largely covers scenarios with i.i.d. observations or strongly convex objectives, bypassing more complex models. To address this gap, our paper tackles the challenges posed by correlated streaming data and the non-convex landscapes inherent in neural network applications.
In this context, we present SHADE (Stochastic Hidden Averaging Data Estimator), a novel mini-batch gradient-based estimator. We further establish its asymptotic normality through a central limit theorem tailored explicitly to its averaging scheme. From a technical perspective, our analysis integrates recent advances in composite (hidden) convex optimization, stochastic processes, and dynamical systems.
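For intuition only: the abstract does not specify SHADE's update rule, so the sketch below shows a generic mini-batch stochastic-gradient iterate with Polyak-Ruppert (iterate) averaging on a dependent data stream. The function names, step-size schedule, and the AR(1)-noise example are illustrative assumptions, not the paper's method.

```python
import numpy as np

def averaged_minibatch_sgd(grad_fn, theta0, data_stream, batch_size=32, lr=0.01, n_steps=1000):
    """Mini-batch SGD with a running (Polyak-Ruppert) average of the iterates."""
    theta = np.asarray(theta0, dtype=float)
    theta_bar = np.zeros_like(theta)
    for t in range(1, n_steps + 1):
        batch = [next(data_stream) for _ in range(batch_size)]
        step = lr / np.sqrt(t)                     # decaying step size (assumed schedule)
        theta = theta - step * grad_fn(theta, batch)
        theta_bar += (theta - theta_bar) / t       # averaged estimator returned below
    return theta_bar

# Illustrative usage: least squares on an AR(1)-correlated stream (assumed, not from the paper).
rng = np.random.default_rng(0)
theta_true = np.array([1.0, -2.0])

def stream():
    eps = 0.0
    while True:
        x = rng.normal(size=2)
        eps = 0.5 * eps + rng.normal()             # weakly dependent noise
        yield x, x @ theta_true + eps

def grad_fn(theta, batch):
    X = np.array([x for x, _ in batch])
    y = np.array([v for _, v in batch])
    return 2.0 * X.T @ (X @ theta - y) / len(batch)

theta_hat = averaged_minibatch_sgd(grad_fn, np.zeros(2), stream(), n_steps=2000)
```

Under suitable conditions, averaged iterates of this kind are the objects whose asymptotic normality results like the paper's central limit theorem describe.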
Submission Number: 123