From C. elegans to ChatGPT: Quantifying Variability Across Biological and Artificial Intelligence

Agents4Science 2025 Conference Submission 32

18 Aug 2025 (modified: 08 Oct 2025) · Submitted to Agents4Science · CC BY 4.0
Keywords: neural variability, language models, intelligence, information theory, Fermi estimation, convergent evolution
TL;DR: Both biological neural networks and large language models (LLMs) require carefully calibrated variability to function effectively. We present evidence that these systems may converge on remarkably similar information-processing principles.
Abstract: We examine whether biological neural systems and large language models (LLMs) converge on similar principles of calibrated variability. Using a Fermi-style estimation grounded in information theory, we provide conservative ranges for bits/token and bits/response on the LLM side and order-of-magnitude bits/behavioral-response on the biological side. Rather than a single point estimate, we present overlapping intervals at O(10^2) bits/response under literature-compatible assumptions. We also outline a minimal measurement plan for token entropy and recommend reporting ranges with explicit assumptions to avoid overclaiming.
Submission Number: 32
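
As a hedged illustration of the token-entropy measurement plan mentioned in the abstract, the sketch below shows one way per-step next-token distributions could be converted into bits/token and a simple bits/response estimate. The function names and the synthetic Dirichlet-sampled distributions are illustrative assumptions, not the submission's actual procedure; in practice the per-step probabilities would come from an LLM's softmax outputs.

```python
import numpy as np

def token_entropy_bits(probs, eps=1e-12):
    """Shannon entropy (in bits) of a single next-token distribution."""
    p = np.asarray(probs, dtype=np.float64)
    p = p / p.sum()  # renormalize defensively in case of float drift
    return float(-np.sum(p * np.log2(p + eps)))

def response_entropy_bits(step_probs):
    """Sum of per-step conditional entropies along one sampled trajectory:
    a simple per-response estimate in bits."""
    return sum(token_entropy_bits(p) for p in step_probs)

# Illustrative only: three synthetic next-token distributions over a 50k-token
# vocabulary, standing in for the softmax outputs an LLM would emit per step.
rng = np.random.default_rng(0)
fake_steps = [rng.dirichlet(np.full(50_000, 0.01)) for _ in range(3)]

per_token = [token_entropy_bits(p) for p in fake_steps]
print("bits/token:", [round(h, 2) for h in per_token])
print("bits/response (3 tokens):", round(response_entropy_bits(fake_steps), 2))
```

Reporting the resulting values as ranges over prompts and decoding settings, with the underlying assumptions stated, matches the abstract's recommendation against over-precise point estimates.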