Addressing Bias in Active Learning with Depth Uncertainty Networks... or Not

Published: 09 Dec 2021, Last Modified: 05 May 2023 · ICBINB@NeurIPS2021 Contributed Talk · Readers: Everyone
Keywords: Active learning, bias correction, depth uncertainty networks, LURE, DUNs, BNNs
TL;DR: We investigate active learning bias correction in the context of depth uncertainty networks and arrive at surprising negative results.
Abstract: Farquhar et al. [2021] show that correcting for active learning bias with underparameterised models leads to improved downstream performance. For overparameterised models such as NNs, however, correction leads either to decreased or to unchanged performance. They suggest that this is due to an “overfitting bias” which offsets the active learning bias. We show that depth uncertainty networks operate in a low-overfitting regime, much like underparameterised models. They should therefore see an increase in performance with bias correction. Surprisingly, they do not. We propose that this negative result, as well as the results of Farquhar et al. [2021], can be explained via the lens of the bias-variance decomposition of generalisation error.
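For context (not part of the original abstract): the bias correction referred to here is, as we recall it, the Levelled Unbiased Risk Estimator (LURE) of Farquhar et al. [2021], which reweights the losses of actively-acquired points to undo the distribution shift introduced by the acquisition proposal $q$. A sketch of the estimator, under our reading of that paper, is:

```latex
% LURE risk estimate over M actively-sampled points from a pool of size N,
% where i_m is the index acquired at step m with proposal probability
% q(i_m; i_{1:m-1}) (notation as we recall it from Farquhar et al. [2021]):
\hat{R}_{\mathrm{LURE}} = \frac{1}{M} \sum_{m=1}^{M} v_m \, \mathcal{L}\big(f(x_{i_m}), y_{i_m}\big),
\qquad
v_m = 1 + \frac{N - M}{N - m}\left(\frac{1}{(N - m + 1)\, q(i_m; i_{1:m-1})} - 1\right).
```

Intuitively, points that were unlikely under the acquisition proposal receive larger weights $v_m$, so that the weighted empirical risk is unbiased for the population risk despite the non-i.i.d. acquisition.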
Category: Negative result: I would like to share my insights and negative results on this topic with the community