Abstract: The information bottleneck (IB) method is a technique for extracting the information about one random variable that is relevant to another, and it has found extensive application in machine learning. In this paper, neural-network-based estimation of the IB problem's solution is studied through the lens of a novel formulation of the IB problem. By exploiting the inherent structure of the IB functional and leveraging the mapping approach, the proposed formulation involves only a single optimization variable and is therefore readily amenable to data-driven estimators based on neural networks. A theoretical analysis guarantees that the neural estimator asymptotically solves the IB problem, and numerical experiments on both synthetic and MNIST datasets demonstrate the estimator's effectiveness.