Supervised Disentanglement Under Hidden Correlations

14 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Disentangled representation learning, Correlations, Mutual information, Causal graph
Abstract: Disentangled representation learning (DRL) methods are often leveraged to improve the generalization of representations. Recent DRL methods have tried to handle attribute correlations by enforcing conditional independence given the attributes. However, complex multi-modal data distributions and hidden correlations beneath the attributes remain unexplored. We theoretically show that existing methods lose mode information under such hidden correlations. To solve this problem, we propose Supervised Disentanglement under Hidden Correlations (SD-HC), which discovers the data modes within each attribute and minimizes a mode-based conditional mutual information to achieve disentanglement. Theoretically, we prove that SD-HC is sufficient for disentanglement under hidden correlations, preserving both mode information and attribute information. Empirically, extensive experiments on toy data and six real-world datasets demonstrate improved generalization over state-of-the-art baselines. Code is available anonymously at https://anonymous.4open.science/r/SD-HC-1FAD.
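The abstract does not specify how the mode-based conditional mutual information is computed; as a hedged illustration only (not the authors' implementation), the sketch below shows a plug-in estimate of a quantity of the form I(z; a | m) from discrete samples, where z is a discretized latent code, a an observed attribute, and m a discovered mode label. All function and variable names here are hypothetical.

```python
# Hedged sketch: plug-in estimate of conditional mutual information
# I(z; a | m) for discrete arrays. This illustrates the kind of quantity
# a mode-based CMI objective would drive toward zero; it is NOT the
# SD-HC implementation from the paper.
import numpy as np

def conditional_mutual_information(z, a, m):
    """Empirical I(z; a | m) in nats for 1-D discrete arrays z, a, m."""
    z, a, m = (np.asarray(x) for x in (z, a, m))
    cmi = 0.0
    for mode in np.unique(m):
        idx = (m == mode)
        p_m = idx.mean()                  # P(m)
        zs, at = z[idx], a[idx]
        for zi in np.unique(zs):
            for ai in np.unique(at):
                p_za = np.mean((zs == zi) & (at == ai))  # P(z, a | m)
                if p_za == 0.0:
                    continue
                p_z = np.mean(zs == zi)   # P(z | m)
                p_a = np.mean(at == ai)   # P(a | m)
                cmi += p_m * p_za * np.log(p_za / (p_z * p_a))
    return cmi
```

When z is independent of a within every mode, the estimate is zero; when z copies a, it equals the conditional entropy of a given m, matching the intuition that disentanglement corresponds to driving this term down.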
Primary Area: learning theory
Submission Number: 4971