Distributionally Robust Group Backwards Compatibility

Published: 02 Dec 2021, Last Modified: 08 Sep 2024
NeurIPS 2021 Workshop DistShift Poster
Keywords: distributional robustness, backwards compatibility, machine learning, fairness
TL;DR: A study on how to improve backward compatibility using distributional robustness
Abstract: Machine learning models are updated as new data is acquired or new architectures are developed. These updates usually increase model performance, but may introduce backward compatibility errors, where individual users or groups of users see their performance on the updated model adversely affected. This problem also arises when training datasets do not accurately reflect overall population demographics, with some groups having lower participation in the data collection process, which poses a significant fairness concern. We analyze how ideas from distributional robustness and minimax fairness can aid backward compatibility in this scenario, and propose two methods to directly address this issue. Our theoretical analysis is backed by experimental results on CIFAR-10, CelebA, and Waterbirds, three standard image classification datasets.
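The abstract does not spell out the two proposed methods, but the quantities it refers to can be illustrated. Below is a minimal sketch, assuming a negative-flip-rate notion of per-group backward compatibility and a group-DRO-style worst-group objective; the function names, reweighting scheme, and data are hypothetical and are not the paper's implementation.

```python
# Illustrative sketch only: the paper's methods are not specified in this
# abstract, so the metric and worst-group objective below are assumptions.
import numpy as np

def per_group_negative_flip_rate(y_true, old_pred, new_pred, groups):
    """Fraction of each group's examples that the old model classified
    correctly but the updated model gets wrong (a backward compatibility error)."""
    rates = {}
    for g in np.unique(groups):
        idx = groups == g
        old_ok = old_pred[idx] == y_true[idx]
        new_ok = new_pred[idx] == y_true[idx]
        rates[g] = float((old_ok & ~new_ok).mean())
    return rates

def worst_group_value(per_group_values):
    """Minimax (group-DRO style) criterion: focus on the worst-off group."""
    return max(per_group_values.values())

# Tiny usage example with made-up labels and predictions for two groups.
y = np.array([0, 1, 1, 0, 1, 0])
old = np.array([0, 1, 1, 0, 0, 0])   # old model predictions
new = np.array([0, 1, 0, 0, 1, 1])   # updated model predictions
g = np.array([0, 0, 0, 1, 1, 1])
rates = per_group_negative_flip_rate(y, old, new, g)
print(rates, worst_group_value(rates))
```

In this framing, a backward-compatible update would keep the worst group's negative flip rate low rather than only improving average accuracy.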
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/distributionally-robust-group-backwards/code)