Exploring Complex Trade-offs in Information Bottleneck through Multi-Objective Optimization

ICLR 2025 Conference Submission 13656 Authors

28 Sept 2024 (modified: 13 Oct 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: Information Bottleneck, Multi-objective Optimization, Mutual Information
Abstract: Information Bottleneck (IB) theory provides a principled approach to analyzing and optimizing how neural networks extract and learn latent representations from data, with the aim of improving network performance and generalization. The IB framework has been applied and validated across various domains in deep learning. However, most studies employing IB require tuning a Lagrange multiplier to balance compression and prediction during optimization. Finding the Lagrange multiplier $\beta$ that best balances compression and prediction is challenging: it relies heavily on empirical tuning, and a single fixed multiplier may fail to capture the complex trade-offs present within the IB paradigm. In this paper, we reformulate the IB problem as a multi-objective optimization problem over the compression and prediction objectives. We employ a gradient-based multi-objective optimization algorithm that adaptively determines the weights for the two objectives. Our method is shown to automatically find Pareto-optimal solutions that balance compression and prediction, and to explore more complex Pareto frontiers than linear weighting can reach. We compare our approach with the Variational Information Bottleneck and its variants across different datasets. Empirical results confirm that our method achieves a more stable and favorable trade-off than IB approaches with manually tuned multipliers. The code is available at \url{https://anonymous.4open.science/r/ASDGASDG}.
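The abstract does not specify which gradient-based multi-objective algorithm is used, so the following is only an illustrative sketch of one standard way to adaptively determine the weights for two objectives: the closed-form min-norm solution of the Multiple Gradient Descent Algorithm (MGDA) for the two-gradient case. The gradients here are stand-in NumPy vectors, not the actual gradients of the compression term $I(X;Z)$ and the prediction term $-I(Z;Y)$; all names are hypothetical.

```python
import numpy as np

def two_objective_weight(g1, g2):
    """MGDA-style closed form for two objectives: find a in [0, 1]
    minimizing ||a*g1 + (1-a)*g2||^2, i.e. the min-norm point of the
    segment between the two gradients."""
    diff = g1 - g2
    denom = float(np.dot(diff, diff))
    if denom == 0.0:
        return 0.5  # identical gradients; any weight gives the same direction
    a = float(np.dot(g2 - g1, g2)) / denom
    return float(np.clip(a, 0.0, 1.0))

# Toy stand-ins for the gradients of the compression and prediction losses.
g_compress = np.array([1.0, 0.0])
g_predict = np.array([0.0, 2.0])

alpha = two_objective_weight(g_compress, g_predict)   # adaptive weight
g_update = alpha * g_compress + (1 - alpha) * g_predict  # common descent direction
```

The resulting `g_update` is a descent direction for both objectives whenever one exists (otherwise the combined gradient vanishes and the current point is Pareto-stationary), which is what lets the weights adapt per step instead of being fixed by a hand-tuned $\beta$.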
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 13656