Abstract: Crowdsourcing has achieved great success in fields such as data annotation, social surveys, and object labeling. However, enticed by potentially high rewards, more and more workers engage in malicious behaviors such as plagiarism, random submission, and offline collusion. Such malicious behaviors not only increase requesters' cost of handling tasks but also degrade the quality of the collected data. Existing studies investigate only specific types of malicious behavior and focus mainly on their impact on aggregation results; moreover, they do not evaluate the effectiveness of these behaviors across different scenarios. In this study, we formally propose a malicious behavior effectiveness analysis model that can be applied in different scenarios. Through comprehensive experiments on four typical malicious behaviors, we demonstrate that as the number of malicious workers increases, all of these behaviors reduce the accuracy of aggregation algorithms, with random submission causing the largest decline. Our study can provide guidance for designing secure crowdsourcing platforms and for ensuring high-quality data.
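The effect summarized above can be illustrated with a minimal simulation. The sketch below is not the paper's experimental setup: it assumes binary labeling tasks, honest workers who are correct with a fixed probability, malicious workers who submit uniformly random labels, and majority voting as the aggregation algorithm; all parameter values are hypothetical.

```python
import random

def majority_vote_accuracy(num_tasks=1000, num_workers=15, num_malicious=0,
                           honest_accuracy=0.8, seed=0):
    """Estimate majority-vote accuracy on binary tasks when some workers
    submit uniformly random labels (illustrative sketch only)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(num_tasks):
        truth = rng.randint(0, 1)
        votes = []
        for w in range(num_workers):
            if w < num_malicious:
                # random-submission worker: label chosen uniformly at random
                votes.append(rng.randint(0, 1))
            elif rng.random() < honest_accuracy:
                # honest worker answers correctly with probability honest_accuracy
                votes.append(truth)
            else:
                votes.append(1 - truth)
        # majority vote over an odd number of workers
        aggregated = 1 if sum(votes) * 2 > num_workers else 0
        correct += (aggregated == truth)
    return correct / num_tasks
```

Sweeping `num_malicious` from 0 toward `num_workers` shows the aggregation accuracy falling, consistent with the abstract's claim that more malicious workers degrade aggregation quality.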