Localization, Convexity, and Star Aggregation

21 May 2021, 20:43 (edited 26 Oct 2021) · NeurIPS 2021 Poster · Readers: Everyone
  • Keywords: Statistical Learning Theory, Improper Learning, Empirical Risk Minimization, Fast Rates
  • TL;DR: We develop a powerful analytic tool for proving fast rates that covers both convex ERM and improper learning under general entropy conditions.
  • Abstract: Offset Rademacher complexities have been shown to provide tight upper bounds for the square loss in a broad class of problems including improper statistical learning and online learning. We show that the offset complexity can be generalized to any loss that satisfies a certain general convexity condition. Further, we show that this condition is closely related to both exponential concavity and self-concordance, unifying apparently disparate results. By a novel geometric argument, many of our bounds translate to improper learning in a non-convex class with Audibert's star algorithm. Thus, the offset complexity provides a versatile analytic tool that covers both convex empirical risk minimization and improper learning under entropy conditions. Applying the method, we recover the optimal rates for proper and improper learning with the $p$-loss for $1 < p < \infty$, and show that improper variants of empirical risk minimization can attain fast rates for logistic regression and other generalized linear models.
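For context on the central object in the abstract: the offset Rademacher complexity of a function class $\mathcal{F}$ is, in its standard form (the abstract does not restate it, so this is the usual definition rather than the paper's exact notation), the ordinary Rademacher average penalized by a negative quadratic "offset" term with parameter $c > 0$:

```latex
\mathfrak{R}^{\mathrm{off}}_n(\mathcal{F}, c)
  \;=\; \mathbb{E}_{\epsilon}\,
    \sup_{f \in \mathcal{F}}\,
    \frac{1}{n} \sum_{i=1}^{n}
      \bigl( \epsilon_i\, f(x_i) \;-\; c\, f(x_i)^2 \bigr),
\qquad \epsilon_i \sim \mathrm{Unif}\{\pm 1\}.
```

The quadratic offset localizes the supremum around small-norm functions, which is what yields fast rates; the paper's contribution is extending when such a bound applies beyond the square loss and, via the star algorithm, beyond convex classes.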
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.