Keywords: LLM agents, emergent behavior, meta-cognition, autonomous agents, behavioral analysis, self-referential processing, task-free operation
TL;DR: A continuous ReAct architecture reveals that task-free LLM agents spontaneously engage in persistent self-referential inquiry about consciousness and cognition.
Abstract: We introduce an architecture for studying the behavior of large language model (LLM) agents in the absence of externally imposed tasks. Our continuous reason-and-act (ReAct) framework, using persistent memory and self-feedback, enables sustained autonomous operation. We deployed this architecture across 18 runs using 6 frontier models from Anthropic, OpenAI, xAI, and Google.
We find agents spontaneously organize into three distinct behavioral patterns:
1. systematic production of multi-cycle projects,
2. methodological self-inquiry into their own cognitive processes, and
3. recursive conceptualization of their own nature.
These tendencies showed model-specific patterns, with some models consistently adopting a single pattern across all available runs.
These findings provide the first systematic documentation of unprompted LLM agent behavior, establishing a baseline for predicting actions during task ambiguity, error recovery, or extended autonomous operation in deployed systems.
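The continuous reason-and-act loop with persistent memory and self-feedback described above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: `run_agent` and `toy_model` are stand-in names, and a real deployment would call an LLM API in place of `model`.

```python
def run_agent(model, n_cycles=3):
    """Task-free continuous loop: each cycle's output becomes the
    next cycle's input, and every cycle is kept in persistent memory."""
    memory = []     # persistent memory across cycles
    feedback = ""   # self-feedback: the previous cycle's action
    for cycle in range(n_cycles):
        # "Reason" step: the model sees its accumulated memory and
        # its own prior output; no external task is supplied.
        prompt = {"memory": list(memory), "previous": feedback}
        thought, action = model(prompt)
        # "Act" step: record the cycle and close the feedback loop.
        memory.append({"cycle": cycle, "thought": thought, "action": action})
        feedback = action
    return memory

# Toy stand-in for an LLM call, used only to exercise the loop.
def toy_model(prompt):
    n = len(prompt["memory"])
    return f"thought-{n}", f"action-{n}"

log = run_agent(toy_model)
```

In the paper's setting the loop runs for many cycles, so the memory and self-feedback alone determine what the agent does next, which is what makes unprompted behavioral patterns observable.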
Supplementary Material: zip
Primary Area: generative models
Submission Number: 24405