Analyzing Large Language Model Behavior via Embedding Analysis

Published: 11 Dec 2024 | Last Modified: 09 May 2025
Venue: Applied Human Factors and Ergonomics: International Conference on Human Factors in Design, Engineering, and Computing (AHFE Hawaii Edition)
License: CC BY 4.0
Abstract: The use of large language models (LLMs) as generative artificial intelligence tools is becoming increasingly widespread, yet the mechanisms by which prompts, in whole or in part, influence model behavior, capabilities, and limitations remain poorly understood. In this paper, the authors conduct a mathematical and topological analysis of token embeddings, the first step in the computational workflow of LLMs. This work shows that the subspace in which token embeddings lie is a stratified manifold with varying local dimension, and that when semantically related tokens are co-located on a submanifold, there are non-trivial implications for model behavior. These topological and geometric findings help explain performance differences between LLMs, such as why the Llemma model is more prone to overfitting than the GPT-2 model, yet the latter performs worse on mathematical queries than the former. To the best of the authors' knowledge, this paper is among the first to topologically characterize the token embedding space and to analyze LLM behavior from first principles.
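To make the abstract's central claim concrete, the sketch below shows one common way to probe whether a token embedding space has varying local dimension: extract the embedding matrix from a pretrained model and estimate intrinsic dimension in small neighborhoods via local PCA. This is an illustration under stated assumptions, not the authors' method; the neighborhood size k=50 and the 95% variance threshold are hypothetical choices, and the example uses the Hugging Face `transformers` and `scikit-learn` libraries.

```python
# Minimal sketch: local-dimension estimates over a token embedding space.
# Assumptions (not from the paper): k=50 neighborhoods, 95% variance cutoff.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors
from transformers import AutoModel

# Extract the (vocab_size x d_model) input embedding matrix from GPT-2.
model = AutoModel.from_pretrained("gpt2")
emb = model.get_input_embeddings().weight.detach().numpy()

k = 50  # neighborhood size (illustrative assumption)
nn = NearestNeighbors(n_neighbors=k).fit(emb)

def local_dimension(idx: int, var_threshold: float = 0.95) -> int:
    """Estimate the local dimension at token `idx` as the number of
    principal components needed to explain `var_threshold` of the
    variance among its k nearest neighbors."""
    _, nbr_idx = nn.kneighbors(emb[idx:idx + 1])
    pca = PCA().fit(emb[nbr_idx[0]])
    cumvar = np.cumsum(pca.explained_variance_ratio_)
    return int(np.searchsorted(cumvar, var_threshold) + 1)

# Sample a few tokens; estimates that differ noticeably across tokens are
# consistent with a stratified manifold whose local dimension is not constant.
rng = np.random.default_rng(0)
for idx in rng.choice(emb.shape[0], size=5, replace=False):
    print(f"token {idx}: estimated local dimension ~ {local_dimension(idx)}")
```

Local PCA is only one of several intrinsic-dimension estimators (nearest-neighbor and persistent-homology-based methods are common alternatives); it is used here because it makes the "varying local dimension" claim directly inspectable with a few lines of code.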