In Pursuit of Regulatable LLMs

Published: 27 Oct 2023, Last Modified: 12 Dec 2023, RegML 2023
Keywords: Regulatable ML, LLM, Case-Based Reasoning, Concept Explanation, Counterfactual Explanation, Causality
TL;DR: We propose a general framework for regulatable LLMs
Abstract: Large Language Models (LLMs) are arguably the biggest breakthrough in artificial intelligence to date. They recently entered the public Zeitgeist amid a surge of media attention surrounding ChatGPT, a large generative language model released by OpenAI that quickly became the fastest-growing application in history. The model achieved unparalleled human-AI conversational ability and even passed several variants of the popular Turing test, which is often taken to measure whether AI systems have achieved general intelligence. Naturally, the world at large wants to utilize these systems for various applications, but to deploy them in truly sensitive domains, the models must often be regulatable before they can be legally used. In this short paper, we propose one approach towards such systems: forcing them to reason using a combination of (1) human-defined concepts, (2) Case-Based Reasoning (CBR), and (3) counterfactual explanations. All three have support from user studies and psychology as being understandable and useful to practitioners of AI systems. We envision that this approach can provide transparent LLMs for text classification tasks that are fully regulatable and auditable.
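The three-part mechanism the abstract proposes could be sketched, in miniature, as a text classifier that maps input to human-defined concept activations, predicts by retrieving the nearest labeled precedent (the CBR step), and reports the smallest concept-space change that would flip the decision (the counterfactual). This is purely an illustrative sketch under assumed toy names and concept lexicons, not the paper's implementation:

```python
import numpy as np

# Hypothetical illustration of the proposed pipeline: (1) human-defined
# concepts, (2) Case-Based Reasoning over labeled precedents, and
# (3) a counterfactual explanation. All names and the toy bag-of-words
# concept lexicons below are assumptions made for this sketch.

CONCEPTS = {
    "urgency": {"now", "immediately", "urgent"},
    "negativity": {"bad", "terrible", "broken"},
}

def concept_vector(text):
    """Map raw text to interpretable concept activations (word overlap)."""
    tokens = set(text.lower().split())
    return np.array([len(tokens & words) for words in CONCEPTS.values()], float)

# A tiny case base of (concept vector, label) precedents.
CASE_BASE = [
    (np.array([2.0, 0.0]), "escalate"),
    (np.array([0.0, 2.0]), "complaint"),
    (np.array([0.0, 0.0]), "routine"),
]

def classify(text):
    """Predict via the nearest precedent in concept space (CBR step)."""
    v = concept_vector(text)
    dists = [np.linalg.norm(v - c) for c, _ in CASE_BASE]
    return CASE_BASE[int(np.argmin(dists))][1], v

def counterfactual(v, target_label):
    """Smallest concept-space change moving v onto a target-labeled case."""
    targets = [c for c, lbl in CASE_BASE if lbl == target_label]
    return min((c - v for c in targets), key=np.linalg.norm)

label, v = classify("the service is terrible and broken")
print(label)                          # → complaint
print(counterfactual(v, "escalate"))  # → [ 2. -2.]
```

Because every prediction is traceable to a named concept vector, a retrieved precedent, and an explicit counterfactual delta, an auditor can inspect each step of the decision, which is the property the paper targets.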
Submission Number: 17