Does Liking Yellow Imply Driving a School Bus? Semantic Leakage in Language Models

ACL ARR 2024 August Submission267 Authors

15 Aug 2024 (modified: 19 Sept 2024)
License: CC BY 4.0
Abstract: Despite their wide adoption, the biases and unintended behaviors of language models remain poorly understood. In this paper, we identify and characterize a previously undocumented phenomenon, which we call semantic leakage, where models leak irrelevant information from the prompt into the generation in unexpected ways. We propose an evaluation setting for detecting semantic leakage both manually and automatically, curate a diverse test suite for diagnosing this behavior, and measure significant semantic leakage in 13 flagship models. We also show that models exhibit semantic leakage in languages besides English and across different settings and generation scenarios. This discovery highlights yet another type of bias in language models that affects their generation patterns and behavior.
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: semantic leakage, analysis, LLMs, bias, association bias, generation bias
Contribution Types: Model analysis & interpretability
Languages Studied: English, Chinese, Hebrew
Submission Number: 267