The Expressive Power of LLM-Based Agents under the Constraint of Finite Context Length

ACL ARR 2026 January Submission7782 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Expressiveness, Agent, Turing complete
Abstract: Large Language Model (LLM)-based agents extend standard Transformers and Chain-of-Thought (CoT) reasoning by incorporating tools, memory, and interaction mechanisms, enabling dynamic reasoning and adaptive decision-making. Despite their empirical success, their theoretical underpinnings remain poorly understood. In this work, we provide a formal characterization of the expressive power of LLM-based agents. We prove that a finite-capacity LLM-based agent with finite context length can achieve Turing completeness, allowing it to simulate arbitrary computational processes using bounded resources. Building on this result, we also analyze the trade-off between context length and environment interaction frequency, showing that extending the context length can reduce costly environment accesses. Our findings offer new theoretical insights into the expressive power and efficiency advantages of LLM-based agents in complex environments.
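The following is a minimal illustrative sketch, not the paper's construction: it shows the general idea behind the Turing-completeness claim, namely that an agent whose per-step "context" is bounded can still simulate an arbitrary Turing machine by offloading the unbounded tape to an external environment and reading or writing only one cell per interaction. The function `run_agent`, the rule encoding, and the example machine are all hypothetical and chosen only for illustration.

```python
from collections import defaultdict

def run_agent(transitions, accept_state, input_str, max_steps=10_000):
    """Simulate a single-tape Turing machine with O(1) context per step.

    transitions: {(state, symbol): (new_state, write_symbol, move)}
                 with move in {-1, +1}; a missing key means the machine halts.
    The tape plays the role of the agent's external environment: the agent
    only ever sees its current state and one tape symbol at a time.
    """
    tape = defaultdict(lambda: "_")      # environment: unbounded external memory
    for i, ch in enumerate(input_str):
        tape[i] = ch
    state, head = "q0", 0
    for _ in range(max_steps):
        symbol = tape[head]              # one environment read per step
        key = (state, symbol)
        if key not in transitions:       # halt when no rule applies
            return state == accept_state
        state, write, move = transitions[key]
        tape[head] = write               # one environment write per step
        head += move
    raise RuntimeError("step budget exceeded")

# Hypothetical example: a machine that accepts strings consisting only of 1s.
rules = {
    ("q0", "1"): ("q0", "1", +1),        # scan right over 1s
    ("q0", "_"): ("qa", "_", +1),        # blank reached: accept
}
print(run_agent(rules, "qa", "111"))     # True
print(run_agent(rules, "qa", "101"))     # False (halts in q0 on '0')
```

In this toy setting, the context-length/interaction trade-off discussed in the abstract corresponds to how many tape cells the agent can cache locally per step: a larger local window lets it batch several tape operations before touching the environment again, at the cost of a longer context.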
Paper Type: Long
Research Area: Special Theme (conference specific)
Research Area Keywords: Explainability of NLP Models
Contribution Types: Model analysis & interpretability, Theory
Languages Studied: English
Submission Number: 7782