Abstract: Regulation is increasingly cited as the most important and pressing concern in machine learning.
However, it is currently unknown how such regulation could be implemented, and, perhaps more importantly, how it would affect model performance and human collaboration if actually realized.
In this paper, we attempt to answer these questions by building a regulatable large language model (LLM) and then quantifying how the additional constraints involved affect (1) model performance and (2) human collaboration.
Our empirical results reveal that it is possible to force an LLM to use human-defined features in a transparent way, but a previously unconsidered "regulation-performance trade-off" reveals itself in the form of a 7.34% drop in classification performance.
Surprisingly, however, we show that despite this drop, such systems actually improve human task completion speed and *appropriate* confidence in a realistic deployment setting compared to no AI assistance, thus paving the way for fair, regulatable AI that benefits users.
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: Interpretability
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 1238