Keywords: Transfer Learning, Adaptation, Quantile Regression, High-dimensional Statistics, Convergence Rate
TL;DR: The paper addresses the quantile regression problem by leveraging existing pretrained models. We develop a quantile regression adapter via transfer learning and provide statistical guarantees on its adaptation efficiency.
Abstract: Adapter tuning is an efficient machine learning strategy that introduces lightweight, sparse trainable parameters into a pretrained model without altering the original parameters (e.g., low-rank adaptation of large language models). Nevertheless, most existing adapter-tuning approaches are developed for risk-neutral task objectives, and the adaptation of risk-sensitive tasks remains largely unexplored. In this paper, we propose a transfer learning-based quantile regression adapter that improves the estimation of quantile-related risks by leveraging existing pretrained models. We also develop a theoretical analysis that quantifies the efficacy of our quantile regression adapter. In particular, we introduce a transferability measure characterizing the intrinsic similarity between the pretrained model and the downstream task, which explains when transferring knowledge can improve downstream learning. Under appropriate transferability and structural assumptions, we establish error bounds for the estimation and out-of-sample prediction quality of our quantile regression adapter. Compared with vanilla approaches without transfer learning, our method is provably more sample efficient. Extensive numerical simulations demonstrate the superiority and robustness of our method empirically.
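To make the setup concrete, here is a minimal illustrative sketch, not taken from the paper: a frozen pretrained predictor is augmented with a sparse linear correction (the "adapter"), fitted by minimizing the pinball (quantile) loss with an L1 penalty. The function names, the L1-penalized formulation, and the optimizer choice are all assumptions made purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def pinball_loss(residuals, tau):
    # Quantile (pinball) loss at level tau.
    return np.mean(np.maximum(tau * residuals, (tau - 1.0) * residuals))

def fit_quantile_adapter(X, y, pretrained_predict, tau=0.9, lam=0.1):
    # Fit a sparse linear correction ("adapter") on top of a frozen
    # pretrained predictor by minimizing pinball loss + L1 penalty.
    # This is a hypothetical sketch, not the paper's exact estimator.
    n, d = X.shape
    f0 = pretrained_predict(X)  # frozen pretrained predictions

    def objective(delta):
        residuals = y - (f0 + X @ delta)  # adapter adds a linear shift
        return pinball_loss(residuals, tau) + lam * np.sum(np.abs(delta))

    res = minimize(objective, x0=np.zeros(d), method="Powell")
    return res.x

# Toy usage: a stand-in "pretrained" linear model and a shifted downstream task.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
pretrained = lambda Z: Z @ np.array([1.0, 0.5, 0.0, 0.0, 0.0])
y = pretrained(X) + 0.3 * X[:, 2] + rng.normal(size=200)
delta_hat = fit_quantile_adapter(X, y, pretrained, tau=0.9, lam=0.05)
```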
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9962