The Reasonable Effectiveness of Diverse Evaluation Data

Published: 01 Jan 2023, Last Modified: 15 Jun 2023, CoRR 2023
Abstract: In this paper, we present findings from a semi-experimental exploration of rater diversity and its influence on safety annotations of conversations generated by humans talking to a generative AI chatbot. We find significant differences in judgments produced by raters from different geographic regions and annotation platforms, and correlate these perspectives with demographic sub-groups. Our work helps define best practices in model development -- specifically human evaluation of generative models -- against the backdrop of growing work on sociotechnical AI evaluations.