Abstract: Despite impressive performance on many benchmark datasets, AI models can still make mistakes, especially on out-of-distribution examples. It remains an open question how such imperfect models can be used effectively in collaboration with humans. Prior work has focused on AI assistance that helps people make individual high-stakes decisions, an approach that does not scale to a large volume of relatively low-stakes decisions, e.g., moderating social media comments. Instead, we propose conditional delegation as an alternative paradigm for human-AI collaboration, in which humans create rules to indicate trustworthy regions of a model. Using content moderation as a testbed, we develop novel interfaces to assist humans in creating conditional delegation rules and conduct a randomized experiment with two datasets to simulate in-distribution and out-of-distribution scenarios. Our study demonstrates the promise of conditional delegation in improving model performance and provides insights into design for this novel paradigm, including the effect of AI explanations.