Abstract: In this paper, we provide three applications of <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$ {\mathsf {f}}$ </tex-math></inline-formula> -divergences: (i) we generalize Sanov’s upper bound on the tail probability of the sum of independent random variables using super-modular <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$ {\mathsf {f}}$ </tex-math></inline-formula> -divergences and show that our generalized Sanov bound strictly improves over the ordinary one; (ii) we consider the lossy compression problem, which studies the set of achievable rates for a given distortion and code length, extend the rate-distortion function using mutual <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$ {\mathsf {f}}$ </tex-math></inline-formula> -information, and provide new and strictly tighter bounds on achievable rates in the finite-blocklength regime using super-modular <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$ {\mathsf {f}}$ </tex-math></inline-formula> -divergences; and (iii) we establish a connection between the generalization error of algorithms with bounded input/output mutual <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$ {\mathsf {f}}$ </tex-math></inline-formula> -information and a generalized rate-distortion problem. 
This connection allows us to bound the generalization error of learning algorithms using lower bounds on the <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$ {\mathsf {f}}$ </tex-math></inline-formula> -rate-distortion function. Our bound relies on a new lower bound on the rate-distortion function that, for some examples, strictly improves over the previously best-known bounds.
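For reference, the standard definition underlying all three applications (the super-modular and mutual variants discussed in the abstract are built from it) is the classical one; this is the textbook formulation, not necessarily the paper's own notation:

```latex
% Classical f-divergence: for a convex function f with f(1) = 0,
% and distributions P, Q with P absolutely continuous w.r.t. Q,
\mathsf{D}_{\mathsf{f}}(P \,\|\, Q)
  \;=\; \int \mathsf{f}\!\left(\frac{\mathrm{d}P}{\mathrm{d}Q}\right)\mathrm{d}Q ,
\qquad \mathsf{f}\ \text{convex},\ \ \mathsf{f}(1) = 0 .
```

Choosing $\mathsf{f}(t) = t\log t$ recovers the Kullback–Leibler divergence, the special case behind the ordinary Sanov bound and rate-distortion function that the paper generalizes.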