Unifying lower bounds on prediction dimension of convex surrogates

21 May 2021, 20:47 (modified: 26 Oct 2021, 19:40) · NeurIPS 2021 Poster
Keywords: Property elicitation, surrogate loss functions, consistency, prediction dimension
TL;DR: We present a framework for deriving lower bounds on prediction dimension that strengthens and broadens previous bounds in the literature.
Abstract: The convex consistency dimension of a supervised learning task is the lowest prediction dimension $d$ such that there exists a convex surrogate $L : \mathbb{R}^d \times \mathcal Y \to \mathbb R$ that is consistent for the given task. We present a new tool based on property elicitation, $d$-flats, for lower-bounding convex consistency dimension. This tool unifies approaches from a variety of domains, including continuous and discrete prediction problems. We use $d$-flats to obtain a new lower bound on the convex consistency dimension of risk measures, resolving an open question due to Frongillo and Kash (NeurIPS 2015). In discrete prediction settings, we show that the $d$-flats approach recovers and even tightens previous lower bounds using feasible subspace dimension.
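For illustration only (this formalization and example are not taken from the paper): the quantity defined in the first sentence of the abstract can be written compactly as a minimum over prediction dimensions, and binary classification with the hinge loss is a standard example of a one-dimensional consistent convex surrogate.

```latex
% Convex consistency dimension: the smallest prediction dimension d
% admitting a convex surrogate L that is consistent for the task.
\[
  d_{\mathrm{cc}} \;=\; \min \bigl\{ d \in \mathbb{N} \;:\;
    \exists\, \text{convex } L : \mathbb{R}^d \times \mathcal{Y} \to \mathbb{R}
    \text{ consistent for the given task} \bigr\}.
\]
% Standard example (not specific to this paper): for binary
% classification with \mathcal{Y} = \{-1, +1\}, the hinge loss
%   L(u, y) = \max(0,\, 1 - uy)
% is a consistent convex surrogate for 0-1 loss, so d_{\mathrm{cc}} = 1.
```

The paper's contribution, per the abstract, is a general tool ($d$-flats) for proving lower bounds on this quantity.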
Supplementary Material: pdf