Unsupervised Learning under Latent Label Shift

Published: 21 Jul 2022, Last Modified: 05 May 2023. SCIS 2022 Poster.
Keywords: unsupervised learning, label shift, topic modeling, domain adaptation, mixture proportion estimation, unsupervised structure discovery, anchor word, deep learning
TL;DR: Leveraging a domain discriminator for unsupervised classification and estimation of domain-specific class mixtures, under label shift.
Abstract: What sorts of structure might enable a learner to discover classes from unlabeled data? Traditional unsupervised learning approaches risk recovering incorrect classes based on spurious feature-space similarity. In this paper, we introduce unsupervised learning under Latent Label Shift (LLS), where label marginals $p_d(y)$ shift but class conditionals $p(\mathbf{x}|y)$ do not. This setting suggests a new principle for identifying classes: elements that shift together across domains belong to the same true class. For finite input spaces, we establish an isomorphism between LLS and topic modeling; for continuous data, we show that if each label's support contains a separable region, analogous to an anchor word, oracle access to $p(d|\mathbf{x})$ suffices to identify $p_d(y)$ and $p_d(y|\mathbf{x})$ up to permutation of latent labels. Thus motivated, we introduce a practical algorithm that leverages domain-discriminative models as follows: (i) push examples through domain discriminator $p(d|\mathbf{x})$; (ii) discretize the data by clustering examples in $p(d|\mathbf{x})$ space; (iii) perform non-negative matrix factorization on the discrete data; (iv) combine recovered $p(y|d)$ with discriminator outputs $p(d|\mathbf{x})$ to compute $p_d(y|\mathbf{x}) \; \forall d$. In semi-synthetic experiments, we show that our algorithm can use domain information to overcome a failure mode of standard unsupervised classification in which feature-space similarity does not indicate true groupings.
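The four-step pipeline in the abstract can be sketched end-to-end on synthetic data. This is an illustrative stand-in, not the authors' implementation: it assumes a logistic-regression domain discriminator, k-means for the discretization step, scikit-learn's NMF for the factorization, and a simplified Bayes-style combination in step (iv); all variable names and the two-Gaussian data generator are invented for the example.

```python
# Hedged sketch of the abstract's pipeline (i)-(iv). Stand-ins are
# assumptions: LogisticRegression as the domain discriminator p(d|x),
# KMeans to discretize p(d|x) space, sklearn NMF for the factorization.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic LLS setup: 2 latent classes with shared p(x|y); 2 domains
# whose label marginals p_d(y) differ (only the marginals shift).
n_per_domain, k_classes, n_domains = 500, 2, 2
class_means = np.array([[-2.0, 0.0], [2.0, 0.0]])
p_y_given_d_true = np.array([[0.8, 0.2],   # domain 0: mostly class 0
                             [0.3, 0.7]])  # domain 1: mostly class 1

X_parts, d_parts = [], []
for dom in range(n_domains):
    y = rng.choice(k_classes, size=n_per_domain, p=p_y_given_d_true[dom])
    X_parts.append(class_means[y] + rng.normal(size=(n_per_domain, 2)))
    d_parts.append(np.full(n_per_domain, dom))
X, d = np.vstack(X_parts), np.concatenate(d_parts)

# (i) Train a domain discriminator and read off p(d|x).
disc = LogisticRegression().fit(X, d)
p_d_given_x = disc.predict_proba(X)            # shape (n, n_domains)

# (ii) Discretize by clustering examples in p(d|x) space.
n_bins = 20
bins = KMeans(n_clusters=n_bins, n_init=10,
              random_state=0).fit_predict(p_d_given_x)

# (iii) NMF on the bin-by-domain count matrix C ~ W @ H, where the
# normalized columns of H play the role of the recovered p(y|d).
C = np.zeros((n_bins, n_domains))
for b, dom in zip(bins, d):
    C[b, dom] += 1
nmf = NMF(n_components=k_classes, init="nndsvda",
          max_iter=1000, random_state=0)
W, H = nmf.fit_transform(C), nmf.components_
p_y_given_d_hat = H / H.sum(axis=0, keepdims=True)  # p(y|d), up to permutation

# (iv) Combine recovered p(y|d) with discriminator outputs. Here a
# simplified combination (an assumption, not the paper's exact rule):
# score(y|x) = sum_d p(d|x) * p(y|d), renormalized per example.
scores = p_d_given_x @ p_y_given_d_hat.T       # shape (n, k_classes)
p_y_given_x = scores / scores.sum(axis=1, keepdims=True)
```

Note that, as the abstract states, the recovered classes are only identified up to a permutation of the latent labels, so any comparison against the true `p_y_given_d_true` must first match columns.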