Abstract: Despite the widespread use of Transformer-based text embedding models in NLP tasks, surprising "sticky tokens" can undermine the reliability of embeddings.
These tokens, when repeatedly inserted into sentences, pull pairwise sentence similarity toward a fixed value, distorting the expected distribution of embedding distances and degrading downstream performance.
In this paper, we systematically investigate such anomalous tokens: we formally define them and introduce an efficient detection method, the Sticky Token Detector (STD), based on sentence and token filtering.
Applying STD to 37 checkpoints across 12 model families, we discover a total of 770 sticky tokens.
Our analysis reveals that these tokens often originate from special or unused entries in the vocabulary, as well as fragmented subwords from multilingual corpora. Notably, their presence does not strictly correlate with model size or vocabulary size.
We further evaluate how sticky tokens affect downstream tasks like clustering and retrieval, observing significant performance drops of up to 50%.
Through attention-layer analysis, we show that sticky tokens disproportionately dominate the model’s internal representations, raising concerns about tokenization robustness.
Our findings underscore the need for more robust tokenization strategies and model designs to mitigate the impact of sticky tokens in future text embedding applications.
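The similarity-pulling effect described in the abstract can be illustrated with a minimal sketch. This is not the paper's evaluation protocol: the checkpoint ("all-MiniLM-L6-v2"), the sentence pair, and the candidate token are all illustrative assumptions; substitute any embedding model and token of interest.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Assumed checkpoint; any text embedding model can be substituted.
model = SentenceTransformer("all-MiniLM-L6-v2")

def pair_sim(a: str, b: str) -> float:
    # Cosine similarity between the embeddings of two sentences.
    u, v = model.encode([a, b])
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

s1 = "The weather is lovely today."
s2 = "Stock prices fell sharply after the announcement."
token = "lucrarea"  # hypothetical candidate token, purely for illustration

print(f"baseline similarity: {pair_sim(s1, s2):.3f}")
# Repeatedly insert the candidate token into both sentences; a sticky token
# pulls the pairwise similarity toward a fixed value as repetitions grow.
for n in (1, 4, 16):
    tail = " " + " ".join([token] * n)
    print(f"n={n:2d}  similarity: {pair_sim(s1 + tail, s2 + tail):.3f}")
```

For an ordinary token, the similarity drifts unpredictably; for a sticky token, it converges toward one value regardless of the original sentence pair.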
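The STD method named in the abstract gains its efficiency from sentence and token filtering, which are not reproduced here. As a contrast, a naive brute-force scan over a vocabulary slice might look like the following sketch, where the probe pairs, repetition count, and slice size are all illustrative assumptions.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint

# Small, hand-picked probe pairs; a real scan would use many more.
probe_pairs = [
    ("A man is playing a guitar.", "The committee approved the new budget."),
    ("Cats sleep most of the day.", "Quantum computers rely on qubits."),
]

def pair_sim(a: str, b: str) -> float:
    u, v = model.encode([a, b])
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def sticky_score(token: str, n_repeat: int = 8) -> float:
    # Mean absolute similarity shift when `token` is appended n_repeat
    # times to both sentences of every probe pair.
    tail = " " + " ".join([token] * n_repeat)
    shifts = [abs(pair_sim(a + tail, b + tail) - pair_sim(a, b))
              for a, b in probe_pairs]
    return float(np.mean(shifts))

# Score a slice of the tokenizer vocabulary and surface the largest shifts.
vocab = list(model.tokenizer.get_vocab())[:200]  # tiny slice, for speed
flagged = sorted(vocab, key=sticky_score, reverse=True)[:5]
print("candidate sticky tokens:", flagged)
```

Scanning a full vocabulary this way is quadratic in practice (every token is scored against every probe pair), which is precisely the cost the filtering steps in STD are designed to avoid.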
Paper Type: Long
Research Area: Semantics: Lexical and Sentence-Level
Research Area Keywords: semantic textual similarity, phrase/sentence embedding
Contribution Types: Model analysis & interpretability, Data analysis
Languages Studied: English
Submission Number: 7921