On the Unreasonable Effectiveness of Last-layer Retraining

TMLR Paper6681 Authors

27 Nov 2025 (modified: 01 Dec 2025) · Under review for TMLR · CC BY 4.0
Abstract: Last-layer retraining (LLR) methods, wherein the last layer of a neural network is reinitialized and retrained on a held-out set following ERM training, have garnered interest as an efficient approach to rectify dependence on spurious correlations and improve performance on minority groups. Surprisingly, LLR has been found to improve worst-group accuracy even when the held-out set is an imbalanced subset of the training set. We initially hypothesize that this "unreasonable effectiveness" of LLR is explained by its ability to mitigate neural collapse through the held-out set, resulting in the implicit bias of gradient descent benefiting robustness. Our empirical investigation does not support this hypothesis. Instead, we present strong evidence for an alternative hypothesis: that the success of LLR is primarily due to better group balance in the held-out set. We conclude by showing how the recent algorithms CB-LLR and AFR perform implicit group-balancing to elicit a robustness improvement.
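To make the procedure the abstract describes concrete, below is a minimal sketch of last-layer retraining in PyTorch: the ERM-trained feature extractor is frozen, the final classification layer is reinitialized, and only that layer is retrained on a held-out set. The model attribute `model.fc`, the loader `heldout_loader`, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def last_layer_retrain(model, heldout_loader, num_classes,
                       epochs=10, lr=1e-3, device="cpu"):
    """Reinitialize and retrain only the last layer of an ERM-trained model."""
    model = model.to(device)

    # Freeze the ERM-trained feature extractor.
    for p in model.parameters():
        p.requires_grad = False

    # Reinitialize the final linear head (assumes the head is `model.fc`,
    # as in torchvision ResNets).
    in_features = model.fc.in_features
    model.fc = nn.Linear(in_features, num_classes).to(device)

    optimizer = torch.optim.SGD(model.fc.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for x, y in heldout_loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    return model
```

Under the paper's group-balance hypothesis, the composition of `heldout_loader` (e.g., sampling groups more evenly than the training set) would be the main driver of the worst-group accuracy gain, rather than the retraining step itself.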
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Hongyang_R._Zhang1
Submission Number: 6681