RoboCrowd: Scaling Robot Data Collection through Crowdsourcing

Published: 29 Oct 2024, Last Modified: 03 Nov 2024 · CoRL 2024 Workshop MRM-D Oral · CC BY 4.0
Keywords: crowdsourcing, data collection, imitation learning
TL;DR: We propose RoboCrowd, a framework for crowdsourcing robot demonstrations in a public environment via incentive design.
Abstract: Imitation learning from large-scale human demonstrations has emerged as a promising paradigm for training robot policies. However, collecting demonstrations is burdensome for expert operators. We introduce a new data collection paradigm, RoboCrowd, which distributes the workload by applying crowdsourcing principles and incentive design. We build RoboCrowd on top of ALOHA (Zhao et al. 2023)—a bimanual platform that supports data collection via puppeteering—to explore the design space for crowdsourcing in-person demonstrations in a public environment. We propose three classes of incentive mechanisms to appeal to users' varying sources of motivation for interacting with the system: material rewards, intrinsic interest, and social comparison. We instantiate these incentives through tasks that include physical rewards and engaging or challenging manipulations, as well as gamification elements such as a leaderboard. We conduct a large-scale, two-week field experiment in which the platform is situated in a university cafe. Over 200 individuals independently volunteered to provide a total of over 800 interaction episodes. Our findings validate the proposed incentives as mechanisms for shaping the quantity and quality of user-contributed data. Further, we demonstrate that the crowdsourced data can serve as useful pre-training data for policies fine-tuned on expert demonstrations—boosting performance by up to 20% compared to when this data is not available. These results suggest the potential for RoboCrowd to reduce the burden of robot data collection by carefully implementing crowdsourcing and incentive design principles.
Submission Number: 44