Group Robustness via Adaptive Class-Specific Scaling

ICML 2023 Workshop SCIS Submission 11

Published: 20 Jun 2023, Last Modified: 28 Jul 2023, SCIS 2023 Poster
Keywords: group robustness, debiasing
TL;DR: Class-specific scaling effectively identifies and controls the inherent trade-off in existing debiasing methods.
Abstract: Group distributionally robust optimization, which aims to improve robust accuracies such as worst-group or unbiased accuracy, is one of the mainstream approaches to mitigating spurious correlations and handling dataset bias. Existing approaches appear to improve robust accuracy, but in practice these gains largely come at the expense of average accuracy. To address this challenge, we first propose a simple class-specific scaling strategy that controls the trade-off between robust and average accuracy flexibly and efficiently, and is directly applicable to existing debiasing algorithms without additional training; it reveals that a naive ERM baseline matches or even outperforms recent debiasing approaches once class-specific scaling is adopted. We then employ this technique to evaluate existing algorithms comprehensively, introducing a novel unified metric that summarizes the trade-off between the two accuracies as a single scalar. We also develop an instance-wise adaptive scaling technique that overcomes the trade-off and improves performance further in terms of both accuracies. Experiments on datasets from the computer vision and natural language processing domains verify the effectiveness of the proposed frameworks. By accounting for the inherent trade-off, our frameworks provide meaningful insights into existing robust approaches beyond comparing robust accuracy alone.
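To make the core idea concrete, below is a minimal sketch of post-hoc class-specific scaling as described in the abstract: per-class factors rescale a trained model's predicted probabilities, and sweeping a factor traces the trade-off curve between average and worst-group accuracy without any retraining. The helper names, the binary-class setup, and the sweep grid are illustrative assumptions; the paper's exact scaling parameterization, adaptive instance-wise variant, and unified scalar metric may differ.

```python
import numpy as np

def class_specific_scaling(probs, scale):
    """Rescale per-class probabilities by class-specific factors
    and re-predict. probs: (N, C) array; scale: (C,) array."""
    scaled = probs * scale          # elementwise per-class scaling
    return scaled.argmax(axis=1)   # new predictions, no retraining

def worst_group_accuracy(preds, labels, groups):
    """Minimum accuracy over the annotated groups."""
    return min(
        (preds[groups == g] == labels[groups == g]).mean()
        for g in np.unique(groups)
    )

def tradeoff_curve(probs, labels, groups, grid=np.linspace(0.5, 2.0, 31)):
    """Sweep a scalar factor s for one class (binary case) and record
    (s, average accuracy, worst-group accuracy) at each point."""
    points = []
    for s in grid:
        preds = class_specific_scaling(probs, np.array([1.0, s]))
        avg = (preds == labels).mean()
        worst = worst_group_accuracy(preds, labels, groups)
        points.append((s, avg, worst))
    return points

# Tiny synthetic demo: 2-class probabilities, labels, and group ids.
rng = np.random.default_rng(0)
probs = rng.dirichlet([1.0, 1.0], size=200)
labels = rng.integers(0, 2, size=200)
groups = rng.integers(0, 4, size=200)
for s, avg, worst in tradeoff_curve(probs, labels, groups)[:3]:
    print(f"s={s:.2f}  avg={avg:.3f}  worst={worst:.3f}")
```

The abstract's unified metric then summarizes such a curve as a single scalar so that methods can be compared on both accuracies at once, rather than on robust accuracy alone.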
Submission Number: 11