Keywords: weak supervision, data programming, data labeling, active learning
Abstract: Obtaining large annotated datasets is critical for training successful machine learning models, and it is often a bottleneck in practice. Weak supervision offers a promising alternative for producing labeled datasets without ground-truth annotations by generating probabilistic labels using multiple noisy heuristics. This process can scale to large datasets and has demonstrated state-of-the-art performance in diverse domains such as healthcare and e-commerce. One practical issue with learning from user-generated heuristics is that their creation requires creativity, foresight, and domain expertise from those who hand-craft them, a process that can be tedious and subjective. We develop the first framework for interactive weak supervision, in which a method proposes heuristics and learns from user feedback given on each proposed heuristic. Our experiments demonstrate that only a small number of feedback iterations are needed to train models that achieve highly competitive test set performance without access to ground-truth training labels. We conduct user studies, which show that users are able to effectively provide feedback on heuristics and that test set results track the performance of simulated oracles.
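To make the weak supervision setup concrete, the sketch below shows how multiple noisy heuristics (labeling functions) can vote on unlabeled examples. The keyword heuristics and the simple majority-vote aggregation are illustrative assumptions for a toy sentiment task; they are not the paper's method, which instead proposes heuristics automatically and learns a label model from user feedback.

```python
# Illustrative sketch of labeling functions for weak supervision.
# These heuristics and the majority-vote aggregator are hypothetical
# examples, not the paper's proposed approach.

ABSTAIN = 0  # a heuristic may decline to vote on an example

def lf_contains_great(text):
    # Noisy heuristic: "great" often signals a positive review.
    return 1 if "great" in text.lower() else ABSTAIN

def lf_contains_terrible(text):
    # Noisy heuristic: "terrible" often signals a negative review.
    return -1 if "terrible" in text.lower() else ABSTAIN

LABELING_FUNCTIONS = [lf_contains_great, lf_contains_terrible]

def majority_vote(text):
    """Aggregate heuristic votes into a single (noisy) label."""
    score = sum(lf(text) for lf in LABELING_FUNCTIONS)
    if score > 0:
        return 1
    if score < 0:
        return -1
    return ABSTAIN  # no heuristic fired, or votes cancelled out

labels = [majority_vote(t) for t in
          ["A great movie", "Terrible plot", "Just okay"]]
# labels == [1, -1, 0]
```

In practice, weak supervision frameworks replace the majority vote with a learned label model that estimates each heuristic's accuracy and outputs probabilistic labels; the interactive framework described here additionally asks a domain expert whether each proposed heuristic is useful before it is included.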
One-sentence Summary: We introduce a framework and method for training classifiers on datasets without ground-truth annotations by interacting with domain experts to discover good weak supervision sources.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Code: [![github](/images/github_icon.svg) benbo/interactive-weak-supervision](https://github.com/benbo/interactive-weak-supervision)
Data: [Amazon Product Data](https://paperswithcode.com/dataset/amazon-product-data), [BiasBios](https://paperswithcode.com/dataset/biasbios), [COCO](https://paperswithcode.com/dataset/coco), [IMDb Movie Reviews](https://paperswithcode.com/dataset/imdb-movie-reviews)