Transductive Active Learning: Theory and Applications

Published: 25 Sept 2024, Last Modified: 06 Nov 2024 · NeurIPS 2024 poster · CC BY 4.0
Keywords: active learning, experimental design, bandits, Bayesian optimization, neural networks, deep learning, fine-tuning, transfer learning, transductive learning, generalization, extrapolation
TL;DR: We develop a theory of automatic data selection for settings where you know what you want to learn, and show that this knowledge can be leveraged to learn much more efficiently than trying to learn "everything".
Abstract: We study a generalization of classical active learning to real-world settings with concrete prediction targets where sampling is restricted to an accessible region of the domain, while prediction targets may lie outside this region. We analyze a family of decision rules that sample adaptively to minimize uncertainty about prediction targets. We are the first to show, under general regularity assumptions, that such decision rules converge uniformly to the smallest possible uncertainty obtainable from the accessible data. We demonstrate their strong sample efficiency in two key applications: active fine-tuning of large neural networks and safe Bayesian optimization, where they achieve state-of-the-art performance.
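The abstract's key mechanism is a decision rule that adaptively picks samples from an accessible region so as to minimize uncertainty about prediction targets that may lie outside that region. Below is a minimal sketch of one such rule under illustrative assumptions: a Gaussian-process model with an RBF kernel and a greedy variance-reduction criterion over the target set. The kernel, lengthscale, noise level, toy domain, and target locations are my own choices for illustration, not the paper's setup or its exact decision rule.

```python
import numpy as np

def rbf(A, B, lengthscale=0.5):
    """Squared-exponential kernel between the rows of A and B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * lengthscale**2))

def target_variance(X_obs, X_target, noise=1e-2):
    """Total GP posterior variance over the target points, given
    (noisy) observations at X_obs."""
    K_tt = rbf(X_target, X_target)
    if len(X_obs) == 0:
        return np.trace(K_tt)
    K_oo = rbf(X_obs, X_obs) + noise * np.eye(len(X_obs))
    K_to = rbf(X_target, X_obs)
    posterior_cov = K_tt - K_to @ np.linalg.solve(K_oo, K_to.T)
    return np.trace(posterior_cov)

def select_next(X_obs, X_cand, X_target):
    """Greedy transductive rule: choose the accessible candidate whose
    hypothetical observation most reduces uncertainty about the targets."""
    scores = [
        target_variance(np.vstack([X_obs, x[None]]), X_target)
        for x in X_cand
    ]
    return X_cand[int(np.argmin(scores))]

# Toy instance: sampling is restricted to [0, 1], while the
# prediction targets lie outside the accessible region.
rng = np.random.default_rng(0)
X_cand = rng.uniform(0.0, 1.0, size=(50, 1))   # accessible sample space
X_target = np.array([[1.5], [2.0]])            # prediction targets
X_obs = np.empty((0, 1))

for _ in range(5):
    x_next = select_next(X_obs, X_cand, X_target)
    X_obs = np.vstack([X_obs, x_next[None]])
    print(f"sampled x = {x_next[0]:.3f}, "
          f"target variance = {target_variance(X_obs, X_target):.4f}")
```

Running this sketch, the rule concentrates samples near the boundary of the accessible region closest to the targets, illustrating the abstract's point: knowing the prediction targets focuses sampling far more efficiently than trying to reduce uncertainty everywhere.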
Supplementary Material: zip
Primary Area: Active learning
Submission Number: 2135
