Keywords: domain generalization, regression, meta-learning
TL;DR: A margin-aware meta-learning method for regression on unseen domains
Abstract: In classification, domain generalization (DG) aims to predict the labels of unseen target-domain data using only labeled source-domain data, where the source and target domains usually share the same label set. In regression, however, DG is not well studied in the literature, mainly because the ranges of the response variables in the two domains often differ and can even be disjoint in extreme cases. In this paper, we systematically investigate domain generalization in the regression setting and propose a weighted meta-learning strategy that learns an initialization optimal across domains to tackle this challenge. Unlike class labels, the labels (response values) in regression are naturally ordered. This ordinal relatedness poses a core challenge for meta-learning in regression: hard meta-tasks, whose domains share less ordinal relatedness, are under-sampled from the training domains. To address these hard meta-tasks, we use the feature discrepancy between any two domains as the importance weight of the corresponding meta-task in the meta-learning framework. Extensive regression experiments on the standard DomainBed benchmark demonstrate the superiority of the proposed method.
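The abstract describes weighting meta-tasks by the feature discrepancy between their source and target domains inside a meta-learning loop. Below is a minimal, hypothetical sketch of that idea on a toy linear-regression model, assuming a first-order MAML-style update, a simple mean-feature distance as the discrepancy measure, and made-up function names (`weighted_meta_step`, `discrepancy`); the paper's actual architecture, discrepancy metric, and update rule may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def discrepancy(Xa, Xb):
    # Toy discrepancy: distance between the mean feature vectors of two
    # domains (a first-moment, MMD-like proxy; an assumption, not the
    # paper's exact measure).
    return np.linalg.norm(Xa.mean(axis=0) - Xb.mean(axis=0))

def task_loss(w, X, y):
    # Mean-squared error of a linear regressor.
    return np.mean((X @ w - y) ** 2)

def task_grad(w, X, y):
    # Gradient of the MSE loss w.r.t. the weights.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def weighted_meta_step(w, tasks, inner_lr=0.02, outer_lr=0.05):
    # tasks: list of ((X_s, y_s), (X_q, y_q)) support/query domain pairs.
    # Meta-tasks spanning more discrepant domain pairs get larger weights.
    imp = np.array([discrepancy(Xs, Xq) for (Xs, _), (Xq, _) in tasks])
    imp = imp / (imp.sum() + 1e-12)  # normalize importance weights
    meta_grad = np.zeros_like(w)
    for wt, ((Xs, ys), (Xq, yq)) in zip(imp, tasks):
        w_inner = w - inner_lr * task_grad(w, Xs, ys)   # inner adaptation
        meta_grad += wt * task_grad(w_inner, Xq, yq)    # weighted outer grad
    return w - outer_lr * meta_grad

# Synthetic domains with slightly different response ranges.
def make_domain(slope, shift, n=50, d=3):
    X = rng.normal(shift, 1.0, size=(n, d))
    y = X @ (slope * np.ones(d)) + 0.1 * rng.normal(size=n)
    return X, y

tasks = [(make_domain(1.0, 0.0), make_domain(1.2, 0.5)),
         (make_domain(1.0, 0.0), make_domain(0.8, -0.5))]
w = np.zeros(3)
init_loss = np.mean([task_loss(w, Xq, yq) for _, (Xq, yq) in tasks])
for _ in range(200):
    w = weighted_meta_step(w, tasks)
final_loss = np.mean([task_loss(w, Xq, yq) for _, (Xq, yq) in tasks])
```

After training, the shared initialization `w` should fit the query domains far better than the zero start, with the more discrepant (harder) pair contributing more to each meta-update.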
Supplementary Material: zip
Submission Number: 4880