Reliability Testing for Natural Language Processing Systems

03 Feb 2021 (modified: 31 May 2021) · OpenReview Anonymous Preprint Blind Submission
Keywords: natural language processing, adversarial attack, testing, reliability, robustness, framework, variation
TL;DR: We argue for the need for reliability testing in NLP and propose a framework, DOCTOR, to illustrate how this can be achieved.
Abstract: Questions of fairness, robustness, and transparency are paramount to address before deploying NLP systems. Central to these concerns is the question of reliability: Can NLP systems reliably treat different demographics fairly and function correctly in diverse and noisy environments? To address this, we argue for the need for reliability testing and contextualize it among existing work on improving accountability. We show how adversarial attacks can be reframed for this goal, via a framework for developing reliability tests. We argue that reliability testing --- with an emphasis on interdisciplinary collaboration --- will enable rigorous and targeted testing, and aid in the enactment and enforcement of industry standards.