An Analysis of Information Bottlenecks

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Abstract: The information bottleneck is a powerful information-theoretic approach for learning effective representations: unnecessary information is minimized while task-relevant information is maximized. Many machine learning algorithms have been derived from information bottlenecks on representations. This study mathematically relates information bottlenecks of intermediate representations to the corresponding expected loss in general settings. We investigate the merit of our new mathematical findings with experiments across a range of architectures and learning settings. Through the theory and experiments, we provide a new foundation for understanding current and future methods for learning intermediate representations with information bottlenecks.
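The abstract does not spell out the paper's formulation, but the standard information bottleneck objective is the Lagrangian min I(X;Z) − β I(Z;Y): compress the input X into a representation Z while preserving information about the label Y. A minimal sketch with discrete toy distributions (all distributions and the value of β here are illustrative, not taken from the paper):

```python
import numpy as np

def mutual_information(joint):
    """I(A;B) in nats, computed from a joint probability table p(a, b)."""
    pa = joint.sum(axis=1, keepdims=True)   # marginal p(a)
    pb = joint.sum(axis=0, keepdims=True)   # marginal p(b)
    nz = joint > 0                          # skip zero-probability cells
    return float(np.sum(joint[nz] * np.log(joint[nz] / (pa @ pb)[nz])))

# Toy discrete setting: a stochastic encoder p(z|x) maps input X to
# representation Z, and labels Y are correlated with X (illustrative values).
p_x = np.array([0.5, 0.5])
p_y_given_x = np.array([[0.9, 0.1],    # p(y|x=0)
                        [0.2, 0.8]])   # p(y|x=1)
p_z_given_x = np.array([[0.8, 0.2],    # encoder p(z|x=0)
                        [0.3, 0.7]])   # encoder p(z|x=1)

p_xz = p_x[:, None] * p_z_given_x               # joint p(x, z)
# Y -- X -- Z forms a Markov chain, so p(y, z) = sum_x p(x) p(y|x) p(z|x).
p_yz = np.einsum('x,xy,xz->yz', p_x, p_y_given_x, p_z_given_x)

beta = 2.0  # trade-off weight between compression and prediction
ib_objective = mutual_information(p_xz) - beta * mutual_information(p_yz)
```

By the data-processing inequality, I(Y;Z) can never exceed I(X;Z) in this Markov chain; the bottleneck trades the two off through β.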
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Supplementary Material: zip