Abstract: In single domain generalization, a model is trained on a single source domain and must generalize to multiple unseen target domains. However, domain discrepancies pose a significant obstacle to this goal. A straightforward solution is to extract class-specific features that are robust to such discrepancies. Existing methods typically learn class-specific features with the assistance of domain-specific features by enlarging the gap between the two. Nevertheless, the absence of domain supervision limits the extraction of domain-specific features, which can in turn lead to the misidentification of class-specific features. To address this issue, we propose Label-expanded Feature Debiasing (LeFD), a novel method that learns class-specific features in a more robust manner. Technically, LeFD introduces domain supervision and explicitly extracts integrated domain-and-class features through label expansion. A rationale alignment module is then employed to eliminate domain information from these integrated features, yielding class-specific features. Extensive experiments on multiple benchmark datasets demonstrate the superiority of the proposed LeFD over other state-of-the-art methods.