Deep Regression Representation Learning with Topology

19 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: regression, topology, representation learning, information theory, depth estimation, super-resolution, age estimation
TL;DR: We establish connections between the topology of the feature space and the information bottleneck principle.
Abstract: The information bottleneck (IB) principle is an important framework that provides guiding principles for representation learning. Most works on representation learning and the IB principle focus only on classification and neglect regression. Yet the two tasks align with the IB principle in different ways: classification targets class separation in the feature space, while regression requires feature continuity and ordinality with respect to the target. This key difference results in topologically different feature spaces. How, then, does the IB principle shape the topology of the feature space? In this work, we establish two connections between them for regression representation learning. The first connection reveals that a lower intrinsic dimension of the feature space implies a reduced complexity of the representation $Z$; reducing this complexity is a learning target of the IB principle. The complexity can be quantified as the entropy of $Z$ conditioned on the target $Y$, and it is shown to upper-bound the generalization error. The second connection suggests that, to better align with the IB principle, it is beneficial to learn a feature space that is topologically similar to the target space. Motivated by these two connections, we introduce a regularizer, PH-Reg, that lowers the intrinsic dimension of the feature space and preserves the topology of the target space for regression. Experiments on synthetic and real-world regression tasks demonstrate the benefits of PH-Reg.
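A note on the first connection: the role of $H(Z \mid Y)$ as a complexity term can be read off the standard IB Lagrangian. The derivation below is a sketch for exposition only, assuming discrete entropies and a deterministic encoder $Z = f(X)$ (so that $I(X;Z) = H(Z)$); it is not taken from the paper itself.

$$\min_f \; I(X;Z) - \beta\, I(Z;Y), \qquad \beta > 1.$$

Since $H(Z) = I(Z;Y) + H(Z \mid Y)$, the objective rewrites as

$$I(X;Z) - \beta\, I(Z;Y) = H(Z \mid Y) + (1-\beta)\, I(Z;Y),$$

so minimizing the IB objective shrinks the complexity term $H(Z \mid Y)$ while, for $\beta > 1$, still rewarding the predictive information $I(Z;Y)$.

To make the second connection concrete, below is a minimal PyTorch sketch of a topology-matching regularizer in the spirit the abstract describes. This is a hypothetical illustration, not the paper's PH-Reg (whose exact loss may differ): it matches 0-dimensional persistent homology between a feature batch $Z$ and a target batch $Y$ by comparing edge lengths along each space's minimum spanning tree, exploiting the fact that MST edges are exactly the 0-dimensional persistence pairs of a Vietoris-Rips filtration. The names `topology_matching_loss` and `mst_edges` and the equal weighting of the two terms are assumptions.

```python
# Hypothetical sketch of a topology-matching regularizer; the actual PH-Reg
# loss in the paper may differ in both form and weighting.
import numpy as np
import torch
from scipy.sparse.csgraph import minimum_spanning_tree


def mst_edges(dist: torch.Tensor) -> torch.Tensor:
    # 0-dim persistence pairs of a Vietoris-Rips filtration coincide with
    # minimum-spanning-tree edges. The edge *indices* are discrete and computed
    # outside autograd; the edge *lengths* gathered from `dist` remain
    # differentiable. Assumes no duplicate points (scipy's dense MST treats
    # zero entries as missing edges).
    mst = minimum_spanning_tree(dist.detach().cpu().numpy()).tocoo()
    return torch.as_tensor(np.stack([mst.row, mst.col], axis=1), dtype=torch.long)


def topology_matching_loss(z: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # z: (n, d) feature batch; y: (n, k) target batch (reshape scalar targets
    # to (n, 1)). Pulls feature-space MST edge lengths toward target-space
    # lengths, and vice versa, so the two spaces share 0-dim topology.
    dz = torch.cdist(z, z)  # pairwise distances in feature space
    dy = torch.cdist(y, y)  # pairwise distances in target space
    ez, ey = mst_edges(dz), mst_edges(dy)
    loss_z = ((dz[ez[:, 0], ez[:, 1]] - dy[ez[:, 0], ez[:, 1]]) ** 2).mean()
    loss_y = ((dz[ey[:, 0], ey[:, 1]] - dy[ey[:, 0], ey[:, 1]]) ** 2).mean()
    return loss_z + loss_y
```

In training, such a term would be added to the task loss, e.g. `loss = mse + lam * topology_matching_loss(features, targets.view(-1, 1))` with a tuning weight `lam`; the abstract's other ingredient, an intrinsic-dimension penalty on the feature space, would enter as a separate term.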
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1763