Revisiting Word Embeddings in the LLM Era

ACL ARR 2025 July Submission1096 Authors

29 Jul 2025 (modified: 28 Aug 2025) · ACL ARR 2025 July Submission · License: CC BY 4.0
Abstract: Large Language Models (LLMs) have recently shown remarkable advances across various NLP tasks. A popular trend has thus emerged in which NLP researchers extract word/sentence/document embeddings from these large decoder-only models and use them for various inference tasks with promising results. However, it is still unclear whether the performance improvement of LLM-induced embeddings is merely a matter of scale or whether the embeddings they produce differ significantly from those of classical encoding models such as Word2Vec, GloVe, Sentence-BERT (SBERT), or the Universal Sentence Encoder (USE). This is the central question we investigate in this paper by systematically comparing classical decontextualized and contextualized word embeddings with their LLM-induced counterparts. Our results show that LLMs cluster semantically related words more tightly and perform better on analogy tasks in decontextualized settings. However, in contextualized settings, classical models such as SimCSE often outperform LLMs on sentence-level similarity assessment tasks, highlighting their continued relevance for fine-grained semantics.
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: Word Embedding, Sentence Embeddings, LLMs, Evaluation, Robustness, Interpretability
Contribution Types: Model analysis & interpretability
Languages Studied: English
Previous URL: https://openreview.net/forum?id=LdrbxDKEqS
Explanation Of Revisions PDF: pdf
Reassignment Request Area Chair: No, I want the same area chair from our previous submission (subject to their availability).
Reassignment Request Reviewers: No, I want the same set of reviewers from our previous submission (subject to their availability).
A1 Limitations Section: This paper has a limitations section.
A2 Potential Risks: Yes
A2 Elaboration: Section 3
B Use Or Create Scientific Artifacts: Yes
B1 Cite Creators Of Artifacts: Yes
B1 Elaboration: Section 3
B2 Discuss The License For Artifacts: N/A
B2 Elaboration: No
B3 Artifact Use Consistent With Intended Use: Yes
B3 Elaboration: Section 3
B4 Data Contains Personally Identifying Info Or Offensive Content: N/A
B4 Elaboration: No
B5 Documentation Of Artifacts: Yes
B5 Elaboration: Section 3
B6 Statistics For Data: Yes
B6 Elaboration: Sections 3, 4
C Computational Experiments: Yes
C1 Model Size And Budget: Yes
C1 Elaboration: Section 3
C2 Experimental Setup And Hyperparameters: Yes
C2 Elaboration: Section 3
C3 Descriptive Statistics: Yes
C3 Elaboration: Sections 3, 4, Appendix
C4 Parameters For Packages: N/A
C4 Elaboration: No
D Human Subjects Including Annotators: No
D1 Instructions Given To Participants: N/A
D1 Elaboration: No
D2 Recruitment And Payment: N/A
D2 Elaboration: No
D3 Data Consent: N/A
D3 Elaboration: No
D4 Ethics Review Board Approval: N/A
D4 Elaboration: No
D5 Characteristics Of Annotators: N/A
D5 Elaboration: No
E AI Assistants In Research Or Writing: Yes
E1 Information About Use Of Ai Assistants: No
E1 Elaboration: We used AI for refining and restructuring the text.
Author Submission Checklist: yes
Submission Number: 1096