Statistical Learning and Inverse Problems: A Stochastic Gradient Approach

Published: 31 Oct 2022, Last Modified: 05 May 2023 · NeurIPS 2022 Accept
Keywords: Statistical Learning, Inverse Problems, Stochastic Gradient Descent
TL;DR: An algorithm based on stochastic gradient descent for solving linear Inverse Problems under a statistical learning framework.
Abstract: Inverse problems are paramount in Science and Engineering. In this paper, we consider the Statistical Inverse Problem (SIP) setup and demonstrate how Stochastic Gradient Descent (SGD) algorithms can be used to solve linear SIPs. We provide consistency and finite-sample bounds for the excess risk. We also propose a modification of the SGD algorithm that leverages machine learning methods to smooth the stochastic gradients and improve empirical performance. We illustrate the algorithm in a setting of great current interest: the Functional Linear Regression model. In this setting, we consider a synthetic data example and a classification problem for predicting the main activity of Bitcoin addresses based on their balances.
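
The sketch below is a minimal illustration of the idea described in the abstract, not the paper's actual algorithm: plain SGD for functional linear regression on a fixed grid, with an optional `smooth` hook standing in for the machine-learning-based smoothing of the stochastic gradients. All function names, the step-size schedule, and the moving-average smoother used in the toy example are assumptions made for illustration.

```python
import numpy as np

def sgd_functional_linear_regression(X, y, grid, n_passes=5, step0=1.0, smooth=None):
    """Plain SGD for functional linear regression on a fixed grid (illustrative sketch).

    X : (n, p) array of curves sampled on `grid`
    y : (n,) scalar responses, assumed y_i ~ <X_i, beta> + noise
    smooth : optional callable mapping a stochastic gradient to a smoothed one
    """
    n, p = X.shape
    dt = grid[1] - grid[0]                  # quadrature weight for the L2 inner product
    beta = np.zeros(p)                      # initial estimate of the slope function
    t = 0
    for _ in range(n_passes):
        for i in np.random.permutation(n):
            t += 1
            step = step0 / np.sqrt(t)       # decaying step size
            resid = dt * X[i] @ beta - y[i]     # prediction error on sample i
            grad = resid * X[i]                 # stochastic gradient of the squared loss
            if smooth is not None:
                grad = smooth(grad)             # stand-in for ML-based gradient smoothing
            beta -= step * grad
    return beta

# Toy usage: recover a smooth slope function from noisy scalar responses.
rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 100)
beta_true = np.sin(2 * np.pi * grid)
X = rng.normal(size=(500, grid.size))
y = (grid[1] - grid[0]) * X @ beta_true + 0.1 * rng.normal(size=500)

# A simple moving-average smoother, a crude proxy for the learned smoothing step.
smoother = lambda g: np.convolve(g, np.ones(9) / 9, mode="same")
beta_hat = sgd_functional_linear_regression(X, y, grid, smooth=smoother)
```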
Supplementary Material: zip