Efficient Multi-View Learning Based on the Information Bottleneck Paradigm

Published: 01 Jan 2022, Last Modified: 12 May 2023, IEEECONF 2022
Abstract: In this work, we study the information bottleneck (IB) approach in the multi-view learning context. The exponentially growing complexity of the optimal representation motivates the development of two novel formulations with more favorable performance-complexity tradeoffs. The first approach is based on forming a stochastic consensus and is suited for scenarios with significant representation overlap between the different views. The second method, relying on incremental updates, is tailored to the opposite extreme, where the representation overlap is minimal. In both cases, we extend our earlier work on the alternating direction method of multipliers (ADMM) solver and establish its convergence and scalability. Empirically, we find that the proposed methods outperform state-of-the-art approaches on multi-view classification problems and vector Gaussian sources under a broad range of modeling parameters.
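For background, the classical single-view information bottleneck objective that these multi-view formulations build on can be written as below; this is only the standard form from the IB literature, not the paper's multi-view objective, whose exact formulation and ADMM splitting are given in the paper itself:

\[
\min_{p(z \mid x)} \; I(X; Z) - \beta \, I(Y; Z)
\]

Here \(Z\) is the learned stochastic representation and \(\beta > 0\) trades off compression of the input \(X\) against retention of information about the target \(Y\). Roughly speaking, a multi-view setting couples several such terms, one per view, which is what drives the representation complexity that the consensus and incremental formulations aim to control.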