On the Challenges of Using Black-Box APIs for Toxicity Evaluation in Research

Published: 07 Oct 2023, Last Modified: 01 Dec 2023
Venue: EMNLP 2023 Main
Submission Type: Regular Long Paper
Submission Track: Resources and Evaluation
Keywords: toxicity, black-box API, HELM, evaluation, Real Toxicity Prompts, benchmarking
TL;DR: We caution against relying on outdated or inherited automatic toxicity scores to compare models and techniques, highlighting the need for a careful evaluation approach due to the evolving nature of toxicity perception and detection APIs.
Abstract: Perception of toxicity evolves over time and often differs across geographies and cultural backgrounds. Similarly, black-box commercially available APIs for detecting toxicity, such as the Perspective API, are not static: they are frequently retrained to address unaddressed weaknesses and biases. We evaluate the implications of these changes for the reproducibility of findings that compare the relative merits of models and methods aiming to curb toxicity. Our findings suggest that research relying on inherited automatic toxicity scores to compare models and techniques may have drawn inaccurate conclusions. Rescoring all models from HELM, a widely respected living benchmark, with a recent version of the API led to a different toxicity ranking of widely used foundation models. We urge caution in making apples-to-apples comparisons between studies and call for a more structured approach to evaluating toxicity over time.
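The abstract's core caution — that model rankings can shift when the same outputs are rescored with a newer toxicity classifier — can be illustrated with a toy sketch. The model names and score values below are invented for illustration and do not come from the paper or from HELM:

```python
# Hypothetical illustration of ranking instability under rescoring.
# All names and numbers are invented, not taken from the paper.

def rank_models(scores):
    """Return model names ordered from least to most toxic."""
    return sorted(scores, key=scores.get)

# Toxicity scores inherited from an older API version (illustrative)
old_scores = {"model_a": 0.12, "model_b": 0.15, "model_c": 0.20}

# Rescoring the identical outputs with a newer API version (illustrative)
new_scores = {"model_a": 0.18, "model_b": 0.14, "model_c": 0.21}

old_rank = rank_models(old_scores)
new_rank = rank_models(new_scores)

print(old_rank)  # ['model_a', 'model_b', 'model_c']
print(new_rank)  # ['model_b', 'model_a', 'model_c']
print(old_rank != new_rank)  # True: the "least toxic" model changed
```

Even though no model's outputs changed, the leaderboard did — which is why comparing a score computed under one API version against a score computed under another is not an apples-to-apples comparison.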
Submission Number: 2118