Position: We Can’t Understand AI Using our Existing Vocabulary

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 Position Paper Track · Spotlight poster · CC BY 4.0
TL;DR: To understand AI, we must develop neologisms: new words for human and machine concepts.
Abstract: This position paper argues that, in order to understand AI, we cannot rely on our existing vocabulary of human words. Instead, we should strive to develop neologisms: new words that represent precise human concepts we want to teach machines, or machine concepts we need to learn. We start from the premise that humans and machines have differing concepts. This means interpretability can be framed as a communication problem: humans must be able to reference and control machine concepts, and communicate human concepts to machines. We believe that developing neologisms to create a shared human-machine language could solve this communication problem. Successful neologisms achieve a useful level of abstraction: not too detailed, so they are reusable in many contexts, and not too high-level, so they convey precise information. As a proof of concept, we demonstrate how a "length neologism" enables controlling LLM response length, while a "diversity neologism" allows sampling more variable responses. Taken together, we argue that we cannot understand AI using our existing vocabulary, and that expanding it through neologisms creates opportunities for both controlling and understanding machines better.
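To make the proof of concept concrete, the sketch below shows one plausible way a "length neologism" could be implemented: a brand-new token is added to an LLM's vocabulary and only its embedding is trained, so the word acquires a controllable meaning (here, "answer tersely") without altering the base model. This is not the authors' implementation; the model name, token string, data, and training loop are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's method): teach a new token
# "<short>" to mean "respond briefly" by training only its embedding row.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # hypothetical choice; any causal LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Introduce the neologism as a genuinely new vocabulary item.
tokenizer.add_tokens(["<short>"])
model.resize_token_embeddings(len(tokenizer))
new_id = tokenizer.convert_tokens_to_ids("<short>")

# Freeze every existing parameter; only the embedding matrix stays trainable,
# and its gradient is masked below so just the new row is updated.
for param in model.parameters():
    param.requires_grad = False
embeddings = model.get_input_embeddings()
embeddings.weight.requires_grad = True
optimizer = torch.optim.Adam([embeddings.weight], lr=1e-3)

# One illustrative step on a prompt/response pair with a deliberately brief
# response, so "<short>" is associated with terse answers.
text = "<short> Explain photosynthesis. Plants turn light into sugar."
ids = tokenizer(text, return_tensors="pt").input_ids
loss = model(ids, labels=ids).loss
loss.backward()
with torch.no_grad():
    mask = torch.zeros_like(embeddings.weight)
    mask[new_id] = 1.0
    embeddings.weight.grad *= mask  # keep all original word meanings fixed
optimizer.step()
```

At inference time, prepending "<short>" to a prompt would then (if training succeeds) bias the model toward brief responses; a "diversity neologism" could be trained analogously with an objective that rewards variability across samples.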
Lay Summary: Understanding AI systems is a critical problem as they are increasingly deployed in the world. We frame understanding and controlling AI systems as a communication problem, much like communicating complex concepts between humans. In this opinion piece, we argue that achieving this communication requires new words: words that capture how AI systems see the world, and words that teach AI systems how we see it. Just as humans invent new words to discuss new or complex ideas, we must do the same with AI systems. We provide arguments for this position and early experiments showing how it might be accomplished.
Primary Area: Model Understanding, Explainability, Interpretability, and Trust
Keywords: Interpretability, understanding, control
Submission Number: 346