Budgeted Task Scheduling for Crowdsourced Knowledge Acquisition

Published: 01 Jan 2017, Last Modified: 13 Nov 2024 · CIKM 2017 · CC BY-SA 4.0
Abstract: Knowledge acquisition (e.g. through labeling) is one of the most successful applications of crowdsourcing. In practice, it is very useful to collect knowledge that is as specific as possible: given a knowledge base, specific knowledge can easily be generalized, whereas it is difficult to infer specific knowledge from general knowledge. At the same time, tasks that acquire more specific knowledge can be more difficult for workers, and thus need more answers to infer high-quality results. Given a limited budget, assigning workers to difficult tasks is therefore more effective for the goal of specific knowledge acquisition. However, existing crowdsourcing task-scheduling approaches cannot incorporate the specificity of workers' answers. In this paper, we present a new framework for task scheduling under a limited budget, targeting an effective solution to acquiring more specific knowledge. We propose novel criteria for evaluating the quality of specificity-dependent answers, and result-inference algorithms that aggregate more specific answers under budget constraints. We have implemented our framework on a real crowdsourcing platform with real data, and achieved significant performance improvements over existing approaches.
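The core scheduling idea in the abstract (harder, more specific tasks deserve more of the answer budget) can be illustrated with a minimal sketch. This is a hypothetical greedy allocator, not the paper's actual algorithm: the function name, the difficulty scores, and the proportional-share rule are all assumptions made for illustration.

```python
# Hypothetical sketch: distribute a fixed answer budget across tasks in
# proportion to estimated difficulty, so harder (more specific) tasks
# receive more answers. Not the paper's algorithm.

def allocate_budget(difficulties, budget, min_answers=1):
    """Split `budget` answer slots across tasks proportionally to their
    estimated difficulty, guaranteeing each task at least `min_answers`."""
    n = len(difficulties)
    if budget < n * min_answers:
        raise ValueError("budget too small to give every task min_answers")
    allocation = [min_answers] * n
    remaining = budget - n * min_answers
    total = sum(difficulties)
    # Provisional proportional shares, rounded down.
    shares = [int(remaining * d / total) for d in difficulties]
    for i, s in enumerate(shares):
        allocation[i] += s
    # Hand leftover slots to the hardest tasks first.
    leftover = remaining - sum(shares)
    hardest = sorted(range(n), key=lambda i: difficulties[i], reverse=True)
    for i in hardest[:leftover]:
        allocation[i] += 1
    return allocation

# Example: three tasks with difficulties 0.9, 0.5, 0.1 and a budget of 15
# answers; the hardest task receives the most answers.
print(allocate_budget([0.9, 0.5, 0.1], 15))
```

In a real system the difficulty estimates would themselves be updated as answers arrive, and the aggregation step would weight answers by their specificity, as the paper's inference algorithms do.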