Multi-Domain Long-Tailed Learning by Augmenting Disentangled Representations

Published: 06 Oct 2023, Last Modified: 06 Oct 2023
Accepted by TMLR
Abstract: Many real-world classification problems exhibit an inescapable long-tailed class-imbalance issue. Current methods for addressing this problem only consider scenarios where all examples come from the same distribution. However, in many cases there are multiple domains, each with its own distinct class imbalance. We study this multi-domain long-tailed learning problem and aim to produce a model that generalizes well across all classes and domains. To this end, we introduce TALLY. Built upon a proposed selective balanced sampling strategy, TALLY mixes the semantic representation of one example with the domain-associated nuisances of another, producing a new representation for use as data augmentation. To improve the disentanglement of semantic representations, TALLY further utilizes a domain-invariant class prototype that averages out domain-specific effects. We evaluate TALLY on several benchmarks and real-world datasets and find that it consistently outperforms other state-of-the-art methods under both subpopulation shift and domain shift.
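To make the augmentation described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of mixing one example's semantic content with another's domain-associated nuisances. The abstract does not specify how the disentanglement is implemented, so the AdaIN-style per-channel statistics swap and the function name `mix_semantics_with_nuisance` are illustrative assumptions, not TALLY's actual method.

```python
import torch

def mix_semantics_with_nuisance(feat_a: torch.Tensor,
                                feat_b: torch.Tensor,
                                eps: float = 1e-6) -> torch.Tensor:
    """Combine the semantic content of feat_a with the domain-associated
    nuisances of feat_b. Inputs are (batch, channels, h, w) feature maps.

    Assumption: the nuisance is approximated here by per-channel
    mean/std statistics (an AdaIN-style stand-in), which is one common
    way to realize such a semantic/nuisance split.
    """
    mu_a = feat_a.mean(dim=(2, 3), keepdim=True)
    sig_a = feat_a.std(dim=(2, 3), keepdim=True) + eps
    mu_b = feat_b.mean(dim=(2, 3), keepdim=True)
    sig_b = feat_b.std(dim=(2, 3), keepdim=True) + eps
    # Normalize away feat_a's own statistics (keeping its semantic
    # content), then re-dress it with feat_b's domain statistics.
    return (feat_a - mu_a) / sig_a * sig_b + mu_b
```

In training, per the abstract's selective balanced sampling, the semantic donor `feat_a` would presumably be drawn in a class-balanced fashion while the nuisance donor `feat_b` comes from a different domain; the mixed feature then inherits the semantic donor's class label for use as an augmented training example.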
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Zhe_Gan1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1316