Outlier Robust Training of Machine Learning Models

TMLR Paper 3691 Authors

15 Nov 2024 (modified: 06 Dec 2024) · Under review for TMLR · CC BY 4.0
Abstract: Robust training of machine learning models in the presence of outliers has garnered attention across various domains. The use of robust losses is a popular approach known to mitigate the effects of outliers. We bring to light two lines of literature that have diverged in how they design robust losses: one based on robust estimation, popular in robotics and computer vision, and the other based on the risk-minimization framework, popular in deep learning. We first show that a simple modification of the Black-Rangarajan duality provides a unifying view. The modified duality yields a definition of a robust loss kernel $\sigma$ that is satisfied by robust losses in both literatures. Second, using the modified duality, we propose two classes of algorithms, namely, graduated non-convexity and adaptive training algorithms. These algorithms are augmented with a novel parameter update rule obtained by interpreting the weights in the modified duality as inlier probabilities. Third, we investigate convergence of the two algorithms to the outlier-free optima, i.e., the ground truth. Considering arbitrary outliers (i.e., with no distributional assumption on the outliers), we show that the use of a robust loss kernel $\sigma$ enlarges the region of convergence. We experimentally demonstrate the efficacy of our algorithms on regression, classification, and neural scene reconstruction tasks.
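For context, the classical Black-Rangarajan duality, which the abstract's modified duality builds on, rewrites a robust loss $\rho$ as a weighted least-squares problem with an outlier process $\Phi$:
$$\rho(r) \;=\; \min_{w \in [0,1]} \left[\, w\, r^2 + \Phi(w) \,\right],$$
so that minimizing $\sum_i \rho(r_i(\theta))$ is equivalent to jointly minimizing over the model parameters $\theta$ and the weights $w_i$, with each $w_i$ read as an inlier score for sample $i$. A minimal sketch of the resulting alternating scheme, assuming a Geman-McClure kernel and a linear model (the kernel choice, function names, and parameters here are illustrative, not the paper's exact algorithm):

```python
import numpy as np

def geman_mcclure_weight(r, c=1.0):
    # Weight w(r) = (c^2 / (c^2 + r^2))^2 induced by the Geman-McClure loss.
    return (c**2 / (c**2 + r**2)) ** 2

def robust_linear_fit(X, y, c=1.0, iters=20):
    # Alternate a weighted least-squares parameter step with a weight step.
    n, d = X.shape
    w = np.ones(n)  # initial inlier weights
    theta = np.zeros(d)
    for _ in range(iters):
        # theta-step: solve argmin_theta sum_i w_i (y_i - x_i^T theta)^2
        Xw = X * w[:, None]
        theta = np.linalg.solve(Xw.T @ X, Xw.T @ y)
        # w-step: recompute inlier weights from the current residuals
        r = y - X @ theta
        w = geman_mcclure_weight(r, c)
    return theta, w
```

A graduated non-convexity variant, as named in the abstract, would additionally anneal the kernel parameter (e.g., start with a large $c$ and shrink it across iterations) to widen the basin of convergence before tightening the fit.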
Submission Length: Long submission (more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=92Rlt7QRVB
Changes Since Last Submission: Our previous submission was desk rejected due to a changed paper font: "Modified font from template default; please revisit and resubmit." This was an oversight on our part, and we have restored the template's default font. Thank you, Rajat Talak
Assigned Action Editor: ~Huaxiu_Yao1
Submission Number: 3691