Archiving Submission: No (non-archival)
Keywords: LLMs, tokenization, mechanism design, strategic behavior
Abstract: State-of-the-art large language models require specialized hardware and substantial energy to operate.
As a consequence, cloud-based services that provide access to large language models have become very popular.
In these services, the price users pay for an output depends on the number of tokens the model uses to generate it: they pay a fixed price per token.
In this work, we show that this pricing mechanism creates a financial incentive for providers to strategize and misreport the number of tokens a model used to generate an output, and that users cannot prove, or even know, whether a provider is overcharging them.
However, we also show that, if an unfaithful provider is obliged to be transparent about the generative process used by the model, misreporting optimally without raising suspicion is hard.
Nevertheless, as a proof-of-concept, we introduce an efficient heuristic algorithm that allows providers to significantly overcharge users without raising suspicion, highlighting the vulnerability of users under the current pay-per-token pricing mechanism.
Further, to eliminate the financial incentive to strategize entirely, we introduce a simple incentive-compatible token pricing mechanism.
Under this mechanism, the price users pay for an output depends on the number of characters in the output: they pay a fixed price per character.
Along the way, to illustrate and complement our theoretical results, we conduct experiments with several large language models from the `Llama`, `Gemma`, and `Ministral` families, using input prompts from the LMSYS Chatbot Arena platform.
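
To make the core incentive concrete, here is a minimal sketch of the pricing argument. It enumerates the valid tokenizations of a fixed output string under a toy vocabulary: every segmentation yields the same visible text, so a per-token bill depends on a report the user cannot verify, while a per-character bill depends only on the output itself. The vocabulary, prices, and the `all_tokenizations` helper are illustrative assumptions, not the paper's algorithm or experimental setup.

```python
# Minimal sketch: why per-token billing is unverifiable from the output
# alone, while per-character billing is not. The vocabulary and prices
# below are illustrative assumptions, not the paper's setup.

TOY_VOCAB = {"he", "llo", "hel", "lo", "hello", "h", "e", "l", "o"}

def all_tokenizations(text, vocab):
    """Enumerate every way to segment `text` into vocabulary tokens."""
    if not text:
        return [[]]
    results = []
    for end in range(1, len(text) + 1):
        prefix = text[:end]
        if prefix in vocab:
            for rest in all_tokenizations(text[end:], vocab):
                results.append([prefix] + rest)
    return results

output = "hello"
price_per_token = 0.002   # assumed flat per-token price
price_per_char = 0.0005   # assumed flat per-character price

# Every tokenization below produces the same visible string, so a user
# who only sees `output` cannot tell which token count was truly used.
for toks in sorted(all_tokenizations(output, TOY_VOCAB), key=len):
    bill = len(toks) * price_per_token
    print(f"{len(toks)} tokens {toks} -> per-token bill ${bill:.4f}")

# A per-character bill depends only on the string itself, which the user
# observes directly, so there is nothing left to misreport.
print(f"per-character bill: ${len(output) * price_per_char:.4f} "
      "(identical for every report)")
```

Running the sketch shows per-token bills ranging from one token up to five for the same five-character output, while the per-character bill is a single fixed amount, which is the intuition behind the incentive-compatible mechanism described above.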
Submission Number: 7