Near-optimal learning of tree-structured distributions by Chow-Liu

Published: 2021 · Last Modified: 22 Jul 2025 · STOC 2021 · CC BY-SA 4.0
Abstract: We provide finite sample guarantees for the classical Chow-Liu algorithm (IEEE Trans. Inform. Theory, 1968) to learn a tree-structured graphical model of a distribution. For a distribution P on Σ^n and a tree T on n nodes, we say T is an ε-approximate tree for P if there is a T-structured distribution Q such that D(P ‖ Q) is at most ε more than the best KL divergence achievable by any tree-structured distribution for P. We show that if P itself is tree-structured, then the Chow-Liu algorithm with the plug-in estimator for mutual information, given O(|Σ|^3 n ε^{-1}) i.i.d. samples, outputs an ε-approximate tree for P with constant probability. In contrast, for a general P (which may not be tree-structured), Ω(n^2 ε^{-2}) samples are necessary to find an ε-approximate tree. Our upper bound is based on a new conditional independence tester that addresses an open problem posed by Canonne, Diakonikolas, Kane, and Stewart (STOC, 2018): we prove that for three random variables X, Y, Z, each over Σ, testing whether I(X; Y | Z) is 0 or ≥ ε is possible with O(|Σ|^3/ε) samples. Finally, we show that for a specific tree T, with O(|Σ|^2 n ε^{-1}) samples from a distribution P over Σ^n, one can efficiently learn the closest T-structured distribution in KL divergence by applying the add-1 estimator at each node.
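
As a concrete illustration (not the paper's code), here is a minimal Python sketch of the classical Chow-Liu procedure the abstract analyzes: estimate each pairwise mutual information with the plug-in (empirical) estimator, then return a maximum-weight spanning tree under those weights. The function names and the use of networkx for the spanning tree are assumptions of this sketch, not part of the paper.

```python
import itertools
import numpy as np
import networkx as nx

def plug_in_mutual_information(samples, i, j, sigma):
    """Plug-in (empirical) estimate of I(X_i; X_j).

    samples: (m, n) integer array with entries in {0, ..., sigma - 1},
    one i.i.d. sample from P per row. (Illustrative helper, not from the paper.)
    """
    m = samples.shape[0]
    joint = np.zeros((sigma, sigma))
    for a, b in zip(samples[:, i], samples[:, j]):
        joint[a, b] += 1.0
    joint /= m
    pi, pj = joint.sum(axis=1), joint.sum(axis=0)
    mask = joint > 0  # restrict to observed pairs; marginals there are positive
    return float((joint[mask] * np.log(joint[mask] / np.outer(pi, pj)[mask])).sum())

def chow_liu_tree(samples, sigma):
    """Chow-Liu: maximum-weight spanning tree under plug-in mutual information."""
    n = samples.shape[1]
    g = nx.Graph()
    for i, j in itertools.combinations(range(n), 2):
        g.add_edge(i, j, weight=plug_in_mutual_information(samples, i, j, sigma))
    return nx.maximum_spanning_tree(g)
```

The abstract's upper bound says that when P is tree-structured, running this procedure on O(|Σ|^3 n ε^{-1}) samples already yields an ε-approximate tree with constant probability.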
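The final result concerns parameter learning once the tree T is fixed. A hedged sketch of the add-1 (Laplace) estimator applied at each node, under the assumption that T is rooted arbitrarily and each node's conditional distribution given its parent is smoothed with pseudocount 1; fit_tree_distribution_add1 and its signature are illustrative, not taken from the paper.

```python
import numpy as np
import networkx as nx

def fit_tree_distribution_add1(samples, tree, root, sigma):
    """Fit a T-structured distribution with add-1 smoothed conditionals.

    Returns (root_marginal, conditionals), where conditionals[child] is a
    (sigma, sigma) array whose row a estimates P(X_child = . | X_parent = a).
    """
    m = samples.shape[0]
    # Add-1 smoothed marginal at the root.
    root_counts = np.bincount(samples[:, root], minlength=sigma) + 1.0
    root_marginal = root_counts / (m + sigma)

    conditionals = {}
    for parent, child in nx.bfs_edges(tree, root):
        counts = np.ones((sigma, sigma))  # pseudocount 1 in every cell
        for a, b in zip(samples[:, parent], samples[:, child]):
            counts[a, b] += 1.0
        # Normalize each row to a conditional distribution over the child.
        conditionals[child] = counts / counts.sum(axis=1, keepdims=True)
    return root_marginal, conditionals
```

Per the abstract, O(|Σ|^2 n ε^{-1}) samples suffice for this per-node estimator to learn the closest T-structured distribution to P in KL divergence.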