Tight bounds on ℓ1 approximation and learning of self-bounding functions

Theor. Comput. Sci., 2020
Abstract: We study the complexity of learning and approximation of self-bounding functions over the uniform distribution on the Boolean hypercube {0,1}^n. Informally, a function f: {0,1}^n → ℝ is self-bounding if, for every x ∈ {0,1}^n, f(x) upper bounds the sum of all n marginal decreases in the value of the function at x. Self-bounding functions include such well-known classes as submodular and fractionally-subadditive (XOS) functions. They were introduced by Boucheron et al. (2010) in the context of concentration of measure inequalities. Our main result is a nearly tight ℓ1-approximation of self-bounding functions by low-degree juntas. Specifically, every self-bounding function can be ε-approximated in ℓ1 by a polynomial of degree Õ(1/ε) over 2^{Õ(1/ε)} variables. We show that both the degree and the junta size are optimal up to logarithmic terms. Previous techniques considered the stronger ℓ2 approximation and proved nearly tight bounds of Θ(1/ε²) on the degree and 2^{Θ(1/ε²)} on the number of variables. Our bounds rely on an analysis of the noise stability of self-bounding functions, together with a stronger connection between noise stability and ℓ1 approximation by low-degree polynomials. This technique can also be used to obtain tighter bounds on ℓ1 approximation by low-degree polynomials and a faster learning algorithm for halfspaces. These results lead to improved, and in several cases almost tight, bounds for PAC and agnostic learning of self-bounding functions relative to the uniform distribution. In particular, assuming hardness of learning juntas, we show that PAC and agnostic learning of self-bounding functions have complexity n^{Θ̃(1/ε)}.
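
For concreteness, the informal condition above can be written out explicitly. The following is a sketch of the standard formalization (following Boucheron et al.; the paper's exact variant and normalization may differ), where x^{⊕i} denotes x with its i-th coordinate flipped:

\[
  \sum_{i=1}^{n} \Bigl( f(x) - \min\bigl\{ f(x),\, f(x^{\oplus i}) \bigr\} \Bigr) \;\le\; f(x)
  \qquad \text{for every } x \in \{0,1\}^n .
\]

Each summand f(x) − min{f(x), f(x^{⊕i})} is the marginal decrease of f at coordinate i (the standard definition additionally bounds each individual decrease by 1). As an illustration, a nonnegative XOS function f(x) = max_c Σ_i w_{c,i} x_i with w_{c,i} ≥ 0 satisfies the condition: the marginal decrease at coordinate i is at most w_{c*,i} x_i for the clause c* attaining the maximum at x, and these terms sum to at most f(x).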