Unifying lower bounds on prediction dimension of convex surrogates

Published: 09 Nov 2021, Last Modified: 05 May 2023
NeurIPS 2021 Poster
Keywords: Property elicitation, surrogate loss functions, consistency, prediction dimension
TL;DR: We present a framework for deriving lower bounds on prediction dimension that strengthens and broadens previous bounds in the literature.
Abstract: The convex consistency dimension of a supervised learning task is the lowest prediction dimension $d$ such that there exists a convex surrogate $L : \mathbb{R}^d \times \mathcal Y \to \mathbb R$ that is consistent for the given task. We present a new tool based on property elicitation, $d$-flats, for lower-bounding convex consistency dimension. This tool unifies approaches from a variety of domains, including continuous and discrete prediction problems. We use $d$-flats to obtain a new lower bound on the convex consistency dimension of risk measures, resolving an open question due to Frongillo and Kash (NeurIPS 2015). In discrete prediction settings, we show that the $d$-flats approach recovers and even tightens previous lower bounds using feasible subspace dimension.
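To make the quantity in the abstract explicit, the definition can be written as a minimization over prediction dimensions. This is a minimal sketch following the abstract's wording; the shorthand $d_{\mathrm{cc}}$ and the phrasing "convex in its first argument" are our own illustrative assumptions, not notation taken from the paper.
$$
d_{\mathrm{cc}} \;=\; \min\Bigl\{\, d \in \mathbb{N} \;:\; \exists\, L : \mathbb{R}^d \times \mathcal{Y} \to \mathbb{R} \text{ convex in its first argument and consistent for the given task} \,\Bigr\}
$$
The paper's $d$-flats tool then lower-bounds this quantity, i.e., it certifies that no consistent convex surrogate exists below a given dimension $d$.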
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Supplementary Material: pdf