Abstract: In this tutorial, we share more than six years of experience with crowdsourced data labeling and bridge the gap between the crowdsourcing and information retrieval communities by showing how to incorporate a human-in-the-loop component into a retrieval system to gather real human feedback on model predictions. Most of the tutorial is devoted to hands-on practice, in which attendees, under our guidance, implement an end-to-end information retrieval pipeline: from problem statement and data labeling to machine learning model training and evaluation.