Analytically deriving Partial Information Decomposition for affine systems of stable and convolution-closed distributions
Keywords: Partial Information Decomposition, Neuroscience, Multimodal learning, Analytical PID
TL;DR: Analytical calculation of Partial Information Decomposition for many well-known distributions, including the Poisson, Cauchy, gamma, and exponential.
Abstract: Bivariate partial information decomposition (PID) has emerged as a promising tool for analyzing interactions in complex systems, particularly in neuroscience. PID achieves this by decomposing the information that two sources (e.g., different brain regions) have about a target (e.g., a stimulus) into unique, redundant, and synergistic terms. However, computing PID remains challenging, often requiring optimization over distributions. While several methods have been proposed for computing PID terms numerically, there is a surprising dearth of work on computing them analytically; the only known analytical PID result is for jointly Gaussian distributions. In this work, we present two theoretical advances that enable analytical calculation of the PID terms for numerous well-known distributions, including distributions relevant to neuroscience such as the Poisson, Cauchy, and binomial. Our first result generalizes the analytical Gaussian PID to the much larger class of stable distributions. We also establish a theoretical link between PID and the emerging fields of data thinning and data fission; our second result exploits this link to derive analytical PID terms for two further classes: convolution-closed distributions and a sub-class of the exponential family. Furthermore, we provide an analytical upper bound for approximately computing PID for convolution-closed distributions and demonstrate its tightness in simulation.
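For context (standard background on bivariate PID in the Williams-Beer formulation, not a claim specific to this submission), the four decomposition terms relate to classical mutual information quantities as follows:

```latex
% Standard bivariate PID identities (Williams & Beer, 2010):
% sources X_1, X_2; target T; R = redundancy, U_i = unique
% information in X_i, S = synergy.
\begin{align*}
  I(T; X_1, X_2) &= R + U_1 + U_2 + S, \\
  I(T; X_1)      &= R + U_1, \\
  I(T; X_2)      &= R + U_2.
\end{align*}
```

These are three equations in four unknowns, so a PID is pinned down by a definition of any one term (typically the redundancy R); it is this defining step that usually involves the optimization over distributions mentioned in the abstract.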
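The link to data thinning alluded to in the abstract can be illustrated in the Poisson case, where convolution-closedness admits a simple binomial thinning construction: if X ~ Poisson(lam) and Y | X ~ Binomial(X, p), then Y ~ Poisson(p*lam), X - Y ~ Poisson((1-p)*lam), and the two pieces are independent. A minimal numpy sketch of this classical fact (illustrative only, with made-up parameter values; not the authors' code):

```python
import numpy as np

# Poisson data thinning: split X ~ Poisson(lam) into two
# independent Poisson pieces via binomial thinning.
rng = np.random.default_rng(0)
lam, p, n = 10.0, 0.3, 100_000

x = rng.poisson(lam, size=n)   # original Poisson draws
y = rng.binomial(x, p)         # thinned piece, ~ Poisson(p * lam)
z = x - y                      # complement, ~ Poisson((1 - p) * lam)

print(np.mean(y), p * lam)           # sample mean ~ 3.0 vs 3.0
print(np.mean(z), (1 - p) * lam)     # sample mean ~ 7.0 vs 7.0
print(np.corrcoef(y, z)[0, 1])       # ~ 0, consistent with independence
```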
Supplementary Material: zip
Primary Area: Neuroscience and cognitive science (neural coding, brain-computer interfaces)
Submission Number: 10428