Unmixing Mean Embeddings for Domain Adaptation with Target Label Proportion
TL;DR: We propose a method for domain adaptation from label proportions in the target domain by unmixing the mean embedding of each bag.
Abstract: We introduce a novel approach to domain adaptation within the context of Learning from Label Proportions (LLP). We address the
challenging scenario where labeled samples are available in the source domain, but only bags of unlabeled samples with their corresponding label proportions are accessible in the target domain.
Our proposed method, bagMME (Bag Matching Mean Embeddings), tackles the distributional shift between domains by focusing on matching class-conditional distributions. A key contribution of bagMME is a simple yet effective unmixing strategy that
leverages the target label proportions to estimate the target class-conditional mean embeddings. These estimated target means are then aligned with their corresponding source class-conditional means, thereby reducing the domain discrepancy. We theoretically demonstrate the soundness of our approach and its effectiveness in mitigating distributional shifts. Extensive experiments on various computer vision datasets showcase the superior performance of bagMME compared to state-of-the-art baselines. Our results highlight the critical role of incorporating target label proportions into the learning process for improved generalization on the target domain.
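The unmixing idea described in the abstract can be illustrated with a small numerical sketch. Since each bag's mean embedding is (up to sampling noise) a convex combination of the class-conditional mean embeddings weighted by that bag's label proportions, the class means can be recovered by solving a linear least-squares system. The function and variable names below are hypothetical, and this is only a minimal sketch of the unmixing step, not the paper's full bagMME method:

```python
import numpy as np

def unmix_class_means(bag_means, proportions):
    """Estimate class-conditional mean embeddings from bag-level statistics.

    Assumes each bag mean is approximately a proportion-weighted mixture
    of the class-conditional means:  bag_means ~= proportions @ class_means.
    Solving this system in the least-squares sense "unmixes" the bags.

    bag_means:   (n_bags, d) per-bag mean embeddings
    proportions: (n_bags, C) per-bag label proportions (rows sum to 1)
    returns:     (C, d) estimated class-conditional mean embeddings
    """
    class_means, *_ = np.linalg.lstsq(proportions, bag_means, rcond=None)
    return class_means

# Synthetic check: with noiseless mixing and enough bags (n_bags >= C),
# the true class means are recovered exactly.
rng = np.random.default_rng(0)
C, d, n_bags = 3, 5, 20
true_means = rng.normal(size=(C, d))
props = rng.dirichlet(np.ones(C), size=n_bags)  # valid proportion rows
bag_means = props @ true_means                  # noiseless bag means
est = unmix_class_means(bag_means, props)
print(np.allclose(est, true_means, atol=1e-6))  # → True
```

In practice the estimated target class means would then be aligned with the source class-conditional means to reduce the domain discrepancy, as the abstract describes.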
Submission Number: 1412