Current strategies for semi-supervised Bayesian active learning generally learn unsupervised representations first and then perform active learning in the resulting latent space with a supervised model. We find that this approach breaks down on messy, uncurated pools because the representations fail to capture the right similarities between inputs. To address this, we propose task-driven representations that are periodically updated during the active learning process. Our approach leads to more effective acquisitions and enhances model performance.
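The loop described above (acquire the most uncertain pool point, then periodically refit the representation from the labels collected so far) can be sketched on a toy problem. This is a minimal illustration, not the paper's method: the "task-driven representation" here is just a 1-D projection along the class-mean difference, refit every few acquisitions, and the data, refresh interval, and acquisition rule are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pool: two Gaussian blobs in 2D (stand-in for an uncurated pool)
n = 100
X = np.vstack([rng.normal([-2.0, 0.0], 1.0, (n, 2)),
               rng.normal([2.0, 0.0], 1.0, (n, 2))])
y = np.array([0] * n + [1] * n)

# Seed the labeled set with two examples per class
labeled = (list(rng.choice(n, 2, replace=False))
           + list(rng.choice(np.arange(n, 2 * n), 2, replace=False)))
pool = [i for i in range(len(X)) if i not in labeled]

def task_direction(X_lab, y_lab):
    # Crude "task-driven representation": project onto the direction
    # between the class means, refit from the current labels
    mu0, mu1 = X_lab[y_lab == 0].mean(0), X_lab[y_lab == 1].mean(0)
    w = mu1 - mu0
    return w / (np.linalg.norm(w) + 1e-12)

for step in range(20):
    if step % 5 == 0:  # periodically refresh the representation
        w = task_direction(X[labeled], y[labeled])
    z = X @ w  # embed the whole pool into the latent space
    mu0 = z[[i for i in labeled if y[i] == 0]].mean()
    mu1 = z[[i for i in labeled if y[i] == 1]].mean()
    thresh = (mu0 + mu1) / 2.0
    # Acquire the pool point closest to the decision boundary
    # (a simple uncertainty-based acquisition)
    scores = np.abs(z[pool] - thresh)
    pick = pool.pop(int(np.argmin(scores)))
    labeled.append(pick)  # oracle reveals the true label

# Evaluate the final classifier on the full pool
pred = (X @ w > thresh).astype(int)
acc = (pred == y).mean()
print(f"labeled: {len(labeled)}, accuracy: {acc:.2f}")
```

Because the projection is refit from the growing labeled set rather than fixed up front, the latent space adapts to the task, which is the intuition the abstract points to; in a fixed unsupervised embedding of a messy pool, the uncertainty scores would be computed over distances that need not reflect label-relevant similarity.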