De-biasing Weakly Supervised Learning by Regularizing Prediction Entropy

Mar 20, 2019 (edited Jul 02, 2019), ICLR 2019 Workshop LLD Blind Submission
Abstract: We explore the effect of regularizing prediction entropy in a weakly supervised setting with inexact class labels. When the underlying data distribution is biased toward a specific subclass, we hypothesize that entropy regularization can be used to bootstrap a training set that mitigates this bias. We conduct experiments on multiple datasets, both under the supervision of an oracle and in a semi-supervised setting, and find substantial reductions in training-set bias that lower the test error rate. These findings suggest entropy regularization is a promising approach to de-biasing weakly supervised learning systems.
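The abstract does not state how the entropy term is combined with the training objective, so the sketch below is only one plausible reading: standard cross-entropy plus a weighted prediction-entropy term, written in PyTorch. The function name entropy_regularized_loss, the weight beta, and the sign of the entropy term are illustrative assumptions, not details taken from the submission.

    import torch
    import torch.nn.functional as F

    def entropy_regularized_loss(logits, targets, beta=0.1):
        # Standard cross-entropy on the (possibly inexact) class labels.
        ce = F.cross_entropy(logits, targets)
        # Shannon entropy of the predictive distribution, averaged over the batch.
        probs = F.softmax(logits, dim=-1)
        log_probs = F.log_softmax(logits, dim=-1)
        entropy = -(probs * log_probs).sum(dim=-1).mean()
        # A positive beta penalizes high-entropy (uncertain) predictions;
        # negating beta would instead reward them. Which direction the paper
        # uses is an assumption here.
        return ce + beta * entropy

    # Example usage with random data (illustrative only):
    logits = torch.randn(8, 5)            # batch of 8 examples, 5 classes
    targets = torch.randint(0, 5, (8,))   # weak / inexact labels
    loss = entropy_regularized_loss(logits, targets)

Under this reading, the entropy score of each prediction could also serve as the bootstrapping signal the abstract describes, e.g. selecting examples for the de-biased training set by their predictive entropy.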