Convergence Behavior of an Adversarial Weak Supervision Method

Published: 26 Apr 2024 · Last Modified: 15 Jul 2024 · UAI 2024 spotlight · CC BY 4.0
Keywords: weak supervision, adversarial weak supervision, balsubramani_freund, dawid_skene, convergence, consistency
TL;DR: Statistical analysis and experiments for an adversarial weak supervision method.
Abstract: Labeling data via rules-of-thumb and minimal label supervision is central to Weak Supervision, a paradigm subsuming subareas of machine learning such as crowdsourced learning and semi-supervised ensemble learning. By using this labeled data to train modern machine learning methods, the cost of acquiring large amounts of hand-labeled data can be ameliorated. Approaches to combining the rules-of-thumb fall into two camps, reflecting different ideologies of statistical estimation. The most common approach, exemplified by the Dawid-Skene model, is based on probabilistic modeling. The other, developed in the work of Balsubramani-Freund and others, is adversarial and game-theoretic. We provide a variety of statistical results for the adversarial approach under log-loss: we characterize the form of the solution, relate it to logistic regression, demonstrate consistency, and give rates of convergence. On the other hand, we find that probabilistic approaches for the same model class can fail to be consistent. Experimental results are provided to corroborate the theoretical results.
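To make the abstract's claim concrete — that under log-loss the adversarial (minimax) aggregate has a logistic-regression form — here is a minimal sketch of Balsubramani-Freund-style aggregation for binary labels. It is not the authors' implementation (see the linked repository for that); the synthetic rules, the accuracy lower bounds `b`, and the max-entropy dual formulation below are illustrative assumptions.

```python
# Sketch: adversarial aggregation of weak rules under log-loss.
# Assumed setup: m rules vote in {-1,+1} on n unlabeled points, and we are
# given lower bounds b_j on each rule's correlation E[y * h_j(x)] with the
# true label. The minimax solution is logistic in a weighted rule vote,
# with weights found by minimizing a convex (max-entropy) dual:
#   min_{sigma >= 0}  sum_i log(2 cosh(s_i)) - n * sum_j sigma_j * b_j,
# where s_i = sum_j sigma_j * h_j(x_i).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, m = 500, 5                      # unlabeled points, weak rules
y_true = rng.choice([-1, 1], n)    # hidden truth (used only for evaluation)

# Each synthetic rule agrees with the truth independently w.p. acc_j.
accs = rng.uniform(0.6, 0.85, m)
H = np.where(rng.random((n, m)) < accs, y_true[:, None], -y_true[:, None])

# Assumed known correlation lower bounds, with a little slack as if
# estimated from a small labeled holdout.
b = 2 * accs - 1 - 0.05

def dual(sigma):
    s = H @ sigma
    return np.sum(np.logaddexp(s, -s)) - n * (sigma @ b)

def dual_grad(sigma):
    s = H @ sigma
    return H.T @ np.tanh(s) - n * b

res = minimize(dual, x0=np.ones(m), jac=dual_grad,
               bounds=[(0.0, None)] * m, method="L-BFGS-B")
sigma = res.x

# Aggregated posterior P(y = +1 | x_i): a sigmoid of the weighted vote,
# i.e. exactly the logistic-regression form the abstract refers to.
p_pos = 1.0 / (1.0 + np.exp(-2.0 * (H @ sigma)))
y_hat = np.where(p_pos >= 0.5, 1, -1)
print("dual weights:", np.round(sigma, 3))
print("aggregated accuracy:", np.mean(y_hat == y_true))
```

The dual variables `sigma` play the role of logistic-regression coefficients on the rule votes: rules whose correlation constraints bind hardest receive the largest weights, and dropping the constraint on an uninformative rule drives its weight to zero.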
Supplementary Material: zip
List Of Authors: An, Steven and Dasgupta, Sanjoy
Latex Source Code: zip
Signed License Agreement: pdf
Code Url: https://github.com/stevenan5/balsubramani-freund-uai-2024
Submission Number: 360