Functional Linear Regression of Cumulative Distribution Functions

Published: 05 Mar 2024, Last Modified: 05 Mar 2024. Accepted by TMLR.
Abstract: The estimation of cumulative distribution functions (CDFs) is an important learning task with a great variety of downstream applications, such as risk assessment in prediction and decision making. In this paper, we study functional regression of contextual CDFs where each data point is sampled from a linear combination of context-dependent CDF basis functions. We propose functional ridge-regression-based estimation methods that estimate CDFs accurately everywhere. In particular, given $n$ samples with $d$ basis functions, we show estimation error upper bounds of $\widetilde O(\sqrt{d/n})$ for the fixed design, random design, and adversarial context cases. We also derive matching information-theoretic lower bounds, establishing minimax optimality for CDF functional regression. Furthermore, we remove the burn-in time in the random design setting using an alternative penalized estimator. Then, we consider agnostic settings where there is a mismatch in the data generation process. We characterize the error of the proposed estimators in terms of the mismatched error, and show that the estimators are well-behaved under model mismatch. Moreover, to complete our study, we formalize infinite-dimensional models where the parameter space is an infinite-dimensional Hilbert space, and establish a self-normalized estimation error upper bound for this setting. Notably, the upper bound reduces to the $\widetilde O(\sqrt{d/n})$ bound when the parameter space is constrained to be $d$-dimensional. Our comprehensive numerical experiments validate the efficacy of our estimation methods in both synthetic and practical settings.
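To make the setup concrete, below is a minimal sketch of a functional ridge-regression estimator for contextual CDFs in the sense of the abstract. The Gaussian-CDF basis, the uniform grid discretization of the integral over $t$, and the ridge weight `lam` are all illustrative assumptions rather than the paper's construction; the closed-form solve is the generic functional ridge solution, not necessarily the authors' exact estimator.

```python
# Hypothetical sketch of functional ridge regression for contextual CDFs.
# The Gaussian-CDF basis, the t-grid, and lam are illustrative assumptions,
# not the paper's exact construction.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
d, n, T = 5, 2000, 400                  # basis dimension, samples, grid size
t = np.linspace(-5.0, 5.0, T)           # evaluation grid for the CDFs
dt = t[1] - t[0]

def basis(x):
    """Context-dependent CDF basis phi(x, .): d Gaussian CDFs whose means
    are shifted by the scalar context x. Returns a (d, T) matrix."""
    means = np.linspace(-2.0, 2.0, d) + x
    return norm.cdf(t[None, :], loc=means[:, None], scale=1.0)

# Ground truth: each y_i is drawn from the mixture CDF theta_star^T phi(x_i, .).
theta_star = rng.dirichlet(np.ones(d))
X = rng.uniform(-1.0, 1.0, size=n)
Y = np.empty(n)
for i, x in enumerate(X):
    F = theta_star @ basis(x)                     # mixture CDF on the grid
    u = rng.uniform()
    Y[i] = t[min(np.searchsorted(F, u), T - 1)]   # inverse-CDF sampling

# Functional ridge regression: regress the indicator 1{y_i <= t}, which is an
# unbiased estimate of F_{x_i}(t) at every t, onto the basis, integrating the
# squared loss over t:
#   theta_hat = argmin_theta sum_i int (1{y_i <= t} - theta^T phi(x_i, t))^2 dt
#               + lam * ||theta||^2
lam = 1.0
A = lam * np.eye(d)        # Gram matrix: sum_i int phi phi^T dt + lam * I
b = np.zeros(d)            # moments:     sum_i int phi * 1{y_i <= t} dt
for x, y in zip(X, Y):
    Phi = basis(x)         # (d, T)
    A += (Phi @ Phi.T) * dt
    b += Phi @ (t >= y).astype(float) * dt

theta_hat = np.linalg.solve(A, b)
print("parameter error ||theta_hat - theta_star||:",
      np.linalg.norm(theta_hat - theta_star))
```

Running this sketch prints a small parameter error that shrinks as $n$ grows, qualitatively illustrating (not proving) the $\widetilde O(\sqrt{d/n})$-type decay stated in the abstract.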
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: In this work, we provide the theory of CDF regression. We study the fixed design, random design, and adversarial context cases, and we provide fixed design, random design, and self-normalized upper bounds for these settings, along with a matching lower bound for CDF regression. We further provide Kolmogorov-Smirnov (KS) concentration bounds as well as risk assessment concentration results, estimators with and without burn-in time, and an extension of CDF regression to infinite-dimensional parameter spaces. Part of our study, namely the self-normalized upper bound in the setting where the parameter space is not infinite-dimensional, has some similarities to prior work. A work by Zhou et al., thankfully suggested by reviewer wsMf, studies linear mixture MDPs, which resembles this part of our work. To learn finite-dimensional parameters, Zhou et al. assume PDF (or PMF) bases and work with much more general response spaces, and their bound depends on the reward magnitude scale. In contrast, our work does not make any assumption on the existence of PDFs (crucial in practice), works only with scalar responses, and our bounds do not depend on the reward magnitude scale (important in practice). To this end, our paper utilizes the unique properties of CDFs to derive the final CDF regression bounds. Following the action editor's suggestion, we incorporated this discussion into the main text. Thanks, Authors. Zhou et al.: Dongruo Zhou, Quanquan Gu, and Csaba Szepesvari. "Nearly minimax optimal reinforcement learning for linear mixture Markov decision processes." Conference on Learning Theory, PMLR, 2021.
Supplementary Material: pdf
Assigned Action Editor: ~Yu_Bai1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1731