TL;DR: We study the conditions under which LLMs can successfully replicate the fractal structure of language and relate these to the quality of their output. We also release a dataset.
Abstract: Language exhibits a fractal structure in its information-theoretic complexity (i.e., bits per token), with self-similarity across scales and long-range dependence (LRD). In this work, we investigate whether large language models (LLMs) can replicate such fractal characteristics and identify conditions, such as decoding temperature and prompting method, under which they may fail. Moreover, we find that the fractal parameters observed in natural language fall within a narrow range, whereas those of LLM output vary widely, suggesting that fractal parameters might prove helpful in detecting a non-trivial portion of LLM-generated texts. Notably, these findings, and many others reported in this work, are robust to the choice of architecture, e.g., Gemini 1.0 Pro, Mistral-7B, and Gemma-2B. We also release a dataset comprising over 240,000 articles generated by various LLMs (both pretrained and instruction-tuned) with different decoding temperatures and prompting methods, along with their corresponding human-generated texts. We hope this work highlights the complex interplay between fractal properties, prompting, and statistical mimicry in LLMs, offering insights for generating, evaluating, and detecting synthetic texts.
Lay Summary: We show that fractal analysis offers a novel and insightful lens for understanding the capabilities and limitations of LLMs in replicating the complex statistical structure of natural language. As we show in the paper, choices such as decoding temperature and prompting method can shift fractal parameters even when log-perplexity scores appear unaffected. This goal is in line with earlier works, which argued that the evaluation of LLMs should go beyond log-perplexity and also consider how well LLMs capture other statistical tendencies observed in natural language. Our key contribution lies in introducing and validating this fractal analysis framework and reporting several novel results, such as the strong correlation between the Hurst exponent and text quality. In addition, we release a benchmark dataset called GAGLE, comprising over 240,000 articles generated by various LLMs. Unlike other public datasets, GAGLE covers multiple prompting strategies and decoding temperatures.
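The Hurst exponent mentioned above measures long-range dependence in a series: values near 0.5 indicate memoryless noise, while higher values indicate persistent, self-similar structure. The sketch below is a minimal rescaled-range (R/S) estimator, not the paper's exact pipeline; the paper analyzes per-token complexity (bits per token), whereas here the estimator is simply demonstrated on a synthetic i.i.d. series, where H should come out near 0.5.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent of a 1-D series via rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_means = [], []
    size = min_chunk
    while size <= n // 2:
        rs_vals = []
        # split the series into non-overlapping chunks of the current size
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = chunk - chunk.mean()
            cum = np.cumsum(dev)
            r = cum.max() - cum.min()  # range of cumulative deviations
            s = chunk.std()            # chunk standard deviation
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            sizes.append(size)
            rs_means.append(np.mean(rs_vals))
        size *= 2
    # the Hurst exponent is the slope of log(R/S) against log(chunk size)
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return slope

rng = np.random.default_rng(0)
h_noise = hurst_rs(rng.standard_normal(4096))  # i.i.d. noise: H close to 0.5
```

In the paper's setting, the input series would be the per-token log-losses of a text under a language model rather than synthetic noise; small-sample bias typically pushes R/S estimates for white noise slightly above 0.5.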
Link To Code: https://huggingface.co/datasets/ibomohsin/gagle/tree/main
Primary Area: Deep Learning->Large Language Models
Keywords: large language models, fractals, Hurst exponent, Hölder exponent, self-similarity, detection, decoding, instruction-tuning, prompting
Submission Number: 1535