Self-tuned Robust Mean Estimators

Published: 07 May 2025, Last Modified: 13 Jun 2025, UAI 2025 Oral, CC BY 4.0
Keywords: robustness, heavy-tailed distribution, self-tuning, gradient descent.
TL;DR: We propose self-tuned robust mean estimators in the presence of heavy-tailed noise.
Abstract: This paper introduces an empirical risk minimization-based approach with concomitant scaling, which eliminates the need to tune a robustification parameter in the presence of heavy-tailed data. The method leverages a new loss function that concurrently optimizes both the mean and the robustification parameter. Through this dual-parameter optimization, the robustification parameter automatically adjusts to the unknown data variance, rendering the method self-tuning. Our approach surpasses previous methods in both computational and asymptotic efficiency. Notably, it avoids relying on cross-validation or Lepski's method to tune the robustification parameter, and the variance of our estimator attains the Cram\'{e}r-Rao lower bound, demonstrating optimal efficiency. In essence, our approach achieves optimal performance across both finite-sample and large-sample scenarios, a feature we describe as \textit{algorithmic adaptivity to both asymptotic and finite-sample regimes}. Numerical studies lend strong support to our methodology. The code is available at \url{https://github.com/NeXAIS/automean}.
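To illustrate the general idea of jointly optimizing a location estimate and a robustification parameter by gradient descent, here is a minimal sketch using a Huber loss with concomitant scaling, where the scale variable plays the role of the self-tuned parameter. This is not the paper's exact loss or implementation (see the linked repository for that); the function name `self_tuned_mean` and the constants `c`, `a`, `lr` are illustrative assumptions.

```python
import numpy as np

def huber_rho(z, c):
    """Huber loss with threshold c."""
    a = np.abs(z)
    return np.where(a <= c, 0.5 * z ** 2, c * a - 0.5 * c ** 2)

def huber_psi(z, c):
    """Derivative (influence function) of the Huber loss."""
    return np.clip(z, -c, c)

def self_tuned_mean(x, lr=0.1, n_iter=500, c=1.345, a=0.5, eps=1e-8):
    """Jointly estimate a location mu and scale s by gradient descent on the
    concomitant-scaling objective
        f(mu, s) = mean_i[ s * rho((x_i - mu) / s) ] + a * s,
    which is jointly convex in (mu, s); the scale s adapts to the unknown
    noise level, so no robustification parameter is tuned by hand.
    (Illustrative sketch only, not the paper's loss.)"""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)                      # robust initial location
    s = np.std(x) + eps                    # rough initial scale
    for _ in range(n_iter):
        r = (x - mu) / s
        grad_mu = -np.mean(huber_psi(r, c))                          # d f / d mu
        grad_s = np.mean(huber_rho(r, c) - huber_psi(r, c) * r) + a  # d f / d s
        mu -= lr * s * grad_mu             # step scaled by s for conditioning
        s = max(s - lr * s * grad_s, eps)  # keep the scale strictly positive
    return mu, s

# Toy usage: heavy-tailed Student-t sample with true mean 3.0.
rng = np.random.default_rng(0)
x = rng.standard_t(df=2.5, size=2000) + 3.0
mu_hat, s_hat = self_tuned_mean(x)
print(f"estimated mean: {mu_hat:.3f}, fitted scale: {s_hat:.3f}")
```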
Code Link: https://github.com/NeXAIS/automean
Submission Number: 334