How Do Coding Agents Spend Your Money? Analyzing and Predicting Token Consumption in Agentic Coding Tasks
Keywords: AI Agent, Coding Agent
Abstract: AI agents offer substantial opportunities to increase human productivity across many tasks. However, their use in complex workflows also drives rapid growth in LLM token consumption. When agents are deployed on tasks that can require millions of tokens, two questions naturally arise: how do agents consume LLM tokens, and can we predict token usage before task execution? In this paper, we use the OpenHands coding agent as a case study and present the first empirical analysis of agent token consumption patterns based on agent trajectories. We further explore whether token costs can be predicted before task execution. We find that (1) agent token consumption is inherently random, even across repeated executions of the same task; (2) unlike in chat and reasoning tasks, input tokens dominate overall consumption and cost, even with token caching; (3) predicting output-token amounts and the range of total consumption achieves weak-to-moderate correlation, offering limited but nontrivial predictive signal. Surprisingly, we also find that (4) higher token usage does not lead to higher accuracy: tasks and runs that consume more tokens are usually associated with lower accuracy. We believe our study provides new and useful insights that can inform decisions around token consumption for the growing number of agentic AI tasks.
Paper Type: Long
Research Area: AI/LLM Agents
Research Area Keywords: AI/LLM Agents, Code Models, Human-AI Interaction/Cooperation and Human-Centric NLP
Languages Studied: English, Programming Languages
Submission Number: 10518