Domain Generalization in Regression

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: domain generalization, regression, meta-learning
TL;DR: We propose a new domain generalization setting for regression and a weighted meta-learning solution.
Abstract: In the context of classification, \textit{domain generalization} (DG) aims to predict the labels of unseen target-domain data using only labeled source-domain data, where the source and target domains usually share \textit{the same label set}. In the context of regression, however, DG is not well studied in the literature, mainly because the ranges of the response variable in the two domains often \textit{differ}, and may even be disjoint in extreme cases. In this paper, we study a new problem setting, \textit{domain generalization in regression} (DGR), and propose a weighted meta-learning strategy that learns an optimal meta-initialization across disjoint domains. The motivation is that when the meta-model performs well on one domain, it should also perform well on related domains. To measure domain relatedness in the regression setting, we compute the feature discrepancy between any two domains in the meta-space and treat this discrepancy as the weight of the corresponding meta-training task in the meta-learning framework. Extensive regression experiments on standard domain generalization benchmarks demonstrate the superiority of the proposed method.
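The abstract leaves the update mechanics implicit, so the following is a minimal sketch of what a discrepancy-weighted meta-update could look like. Everything here is an illustrative assumption rather than the authors' implementation: the PyTorch setting, the first-order (FOMAML-style) outer step, the mean-feature distance standing in for the "feature discrepancy in meta-space", the task dictionary keys (x_s, y_s, x_q, y_q), and the names Regressor, domain_discrepancy, and weighted_meta_step.

```python
import copy
import torch
import torch.nn as nn


class Regressor(nn.Module):
    """Small MLP regressor; everything before the last layer is the feature extractor."""

    def __init__(self, in_dim=10, hid=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(in_dim, hid), nn.ReLU(),
            nn.Linear(hid, hid), nn.ReLU(),
        )
        self.head = nn.Linear(hid, 1)

    def forward(self, x):
        return self.head(self.features(x))


def domain_discrepancy(model, x_i, x_j):
    """Discrepancy between two domains in the meta-model's feature space.
    A mean-feature (linear-kernel MMD) distance is an assumption; the page
    does not pin down the exact discrepancy measure."""
    with torch.no_grad():
        mu_i = model.features(x_i).mean(dim=0)
        mu_j = model.features(x_j).mean(dim=0)
        return torch.norm(mu_i - mu_j).item()


def weighted_meta_step(meta_model, meta_opt, tasks, inner_lr=1e-2, inner_steps=1):
    """One first-order meta-update: each meta-training task's query loss is
    scaled by its discrepancy-based weight before gradients are accumulated."""
    mse = nn.MSELoss()
    meta_opt.zero_grad()
    total_w = sum(t["weight"] for t in tasks)
    for t in tasks:
        # Inner loop: adapt a copy of the meta-model on the task's support set.
        learner = copy.deepcopy(meta_model)
        inner_opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            inner_opt.zero_grad()
            mse(learner(t["x_s"]), t["y_s"]).backward()
            inner_opt.step()
        # Outer loss on the query set, scaled by the normalized task weight.
        # First-order approximation: gradients taken w.r.t. the adapted
        # learner are accumulated directly into the meta-model's gradients.
        q_loss = (t["weight"] / total_w) * mse(learner(t["x_q"]), t["y_q"])
        grads = torch.autograd.grad(q_loss, learner.parameters())
        for p, g in zip(meta_model.parameters(), grads):
            p.grad = g if p.grad is None else p.grad + g
    meta_opt.step()
```

A task's weight would come from domain_discrepancy evaluated between the task's domain and the other source domains, e.g. t["weight"] = domain_discrepancy(meta_model, x_task, x_other). Note the abstract's wording ("treat the discrepancy as the weight") is followed literally here; whether a larger discrepancy should up- or down-weight a task (for instance via exp(-d)) is a design choice the page does not resolve.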
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning