Keywords: fairness, subgroups, data balancing, latent representations, generalisation, data allocation
TL;DR: We show that a fine-tuned model’s sensitivity to subgroup training data balance can be predicted from how much those subgroups are separated in the pre-trained model's latent representations.
Abstract: Unequal representation of demographic groups in training data poses challenges to model generalisation across populations. Standard practice assumes that balancing subgroup representation optimises performance. However, recent empirical results contradict this assumption: in some cases, imbalanced data distributions actually improve subgroup performance, while in others subgroup performance remains unaffected even when an entire subgroup is absent during training. We conduct a systematic study of subgroup allocation across four vision and language models, varying training data composition to characterise the sensitivity of subgroup performance to data balance. We propose the latent separation hypothesis, which states that a partially fine-tuned model's dependence on subgroup representation is determined by the degree of separation between the subgroups' representations in the pre-trained model's latent space. We formalise this hypothesis, provide theoretical analysis, and validate it empirically. Finally, we present a practical application to foundation model fine-tuning, demonstrating that quantitative analysis of latent subgroup separation can inform data collection and balancing decisions.
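As a rough illustration of the kind of quantitative analysis the abstract refers to, the sketch below estimates how separable two subgroups are in a pre-trained model's latent space by measuring the cross-validated AUC of a linear probe on the subgroup labels. This is not the paper's method: the specific separation metric, the probe, and the synthetic embeddings are assumptions made here purely for illustration; a low score would suggest subgroup performance is less sensitive to data balance under the latent separation hypothesis, while a high score would suggest balance matters more.

```python
# Minimal sketch (assumed setup, not the paper's metric): score subgroup
# separation in a pre-trained model's latent space via a linear-probe AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def subgroup_separation(latents: np.ndarray, subgroup_labels: np.ndarray) -> float:
    """Cross-validated AUC of a linear probe predicting subgroup membership.

    A score near 0.5 suggests the subgroups are entangled in latent space;
    a score near 1.0 suggests they are (linearly) well separated.
    """
    probe = LogisticRegression(max_iter=1000)
    scores = cross_val_score(probe, latents, subgroup_labels, cv=5, scoring="roc_auc")
    return float(scores.mean())


# Synthetic example: embeddings for two subgroups drawn from Gaussians.
# Replace these with embeddings from the actual pre-trained encoder.
rng = np.random.default_rng(0)
z_a = rng.normal(loc=0.0, scale=1.0, size=(200, 64))
z_b = rng.normal(loc=2.0, scale=1.0, size=(200, 64))
latents = np.vstack([z_a, z_b])
labels = np.array([0] * 200 + [1] * 200)

print(f"subgroup separation score: {subgroup_separation(latents, labels):.3f}")
```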
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 21083