Assisting Human Decisions in Document Matching

Published: 25 Jul 2023, Last Modified: 25 Jul 2023
Accepted by TMLR
Abstract: Many practical applications, ranging from paper-reviewer assignment in peer review to job-applicant matching for hiring, require human decision makers to identify relevant matches by combining their expertise with predictions from machine learning models. In many such model-assisted document matching tasks, the decision makers have stressed the need for assistive information about the model outputs (or the data) to facilitate their decisions. In this paper, we devise a proxy matching task that allows us to evaluate which kinds of assistive information improve decision makers' performance (in terms of accuracy and time). Through a crowdsourced (N = 271 participants) study, we find that providing black-box model explanations reduces users' accuracy on the matching task, contrary to the commonly held belief that they can be helpful by enabling a better understanding of the model. On the other hand, custom methods that are designed to closely attend to some task-specific desiderata are found to be effective in improving user performance. Surprisingly, we also find that the users' perceived utility of assistive information is misaligned with their objective utility (measured through their task performance).
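As a purely illustrative sketch of the kind of document-matching model the abstract describes (not the authors' method or code from the linked repository), one common baseline scores query-candidate pairs by TF-IDF cosine similarity; the function name `score_matches` and the toy corpus below are hypothetical.

```python
# Illustrative sketch only -- NOT the paper's method. A document-matching
# model of the kind the abstract describes might rank candidate documents
# (e.g., reviewer profiles) for a query (e.g., a paper abstract) by text
# similarity; here, TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def score_matches(query: str, candidates: list[str]) -> list[tuple[int, float]]:
    """Return (candidate_index, similarity) pairs sorted best-first."""
    vectorizer = TfidfVectorizer(stop_words="english")
    # Fit on the query and candidates together so they share one vocabulary.
    matrix = vectorizer.fit_transform([query] + candidates)
    sims = cosine_similarity(matrix[0], matrix[1:]).ravel()
    return sorted(enumerate(sims), key=lambda t: -t[1])

if __name__ == "__main__":
    query = "graph neural networks for molecular property prediction"
    candidates = [
        "reviewer with papers on molecules and graph neural networks",
        "reviewer focused on reinforcement learning for robotics",
    ]
    for idx, sim in score_matches(query, candidates):
        print(f"candidate {idx}: similarity {sim:.3f}")
```

In a model-assisted setting like the one studied in the paper, such scores would be shown to a human decision maker, possibly alongside assistive information about why a candidate scored highly.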
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:
- Added links to the code and demo.
- Added acknowledgements.
Code: https://github.com/wnstlr/document-matching
Supplementary Material: zip
Assigned Action Editor: ~Alessandro_Sperduti1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 869