LMaaS: Exploring Pricing Strategy of Large Model as a Service for Communication

Published: 01 Jan 2024, Last Modified: 27 Jun 2025 · IEEE Trans. Mob. Comput. 2024 · CC BY-SA 4.0
Abstract: One of the most important features of next-generation communication is the incorporation of intelligence toward semantic communication, where highly condensed semantic information accounting for both source and channel features is extracted and transmitted. Recent popular large models such as GPT-4, together with rapidly advancing learning techniques, are envisioned to accelerate its practical implementation in the near future. Given the “train once, use widely” characteristic of these multimodal large language models, we argue that a pay-as-you-go service mode is suitable in this context, referred to as Large Model as a Service (LMaaS). However, trading and pricing are complicated by heterogeneous and dynamic customer environments, making the pricing optimization problem difficult to solve with off-the-shelf methods. In this paper, we optimize the profit of both the seller and the customers. We formulate LMaaS market trading as a Stackelberg game with two steps. In the first step, we optimize the seller's pricing decision and propose an Iterative Model Pricing (IMP) algorithm that optimizes the prices of large models iteratively by reasoning about customers' future rental decisions, achieving a near-optimal pricing solution. In the second step, we optimize customers' selection decisions by designing a robust selecting and renting (RSR) algorithm, whose optimality we establish with a rigorous theoretical proof. Extensive experiments confirm the effectiveness and robustness of our algorithms, which outperform the state-of-the-art solution by 43.96% in customer-side profit and achieve near-optimal seller-side profit.
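To make the two-step Stackelberg structure concrete, the following is a minimal illustrative sketch of a leader–follower pricing loop: the seller (leader) searches candidate price vectors while anticipating how customers (followers) would best-respond by renting the model that maximizes their own profit. The model parameters, customer valuations, utility forms, and the grid search are assumptions made purely for illustration; this is not the paper's IMP or RSR algorithm.

```python
# Illustrative Stackelberg pricing sketch (assumed utilities/parameters, not the paper's IMP/RSR).
import itertools

# Hypothetical large models offered by the seller: (accuracy gain, serving cost per rental).
MODELS = {"small": (0.70, 1.0), "large": (0.92, 3.0)}

# Hypothetical heterogeneous customers: valuation per unit of accuracy.
CUSTOMER_VALUATIONS = [4.0, 6.0, 9.0, 12.0]


def customer_best_response(prices):
    """Follower step: each customer rents the model maximizing its own profit,
    or rents nothing if every option yields non-positive profit."""
    choices = []
    for v in CUSTOMER_VALUATIONS:
        best, best_profit = None, 0.0
        for name, (acc, _) in MODELS.items():
            profit = v * acc - prices[name]  # customer's value minus rental price
            if profit > best_profit:
                best, best_profit = name, profit
        choices.append(best)
    return choices


def seller_profit(prices):
    """Seller profit given the customers' anticipated best responses."""
    profit = 0.0
    for choice in customer_best_response(prices):
        if choice is not None:
            _, cost = MODELS[choice]
            profit += prices[choice] - cost
    return profit


def iterative_pricing(grid=(2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0)):
    """Leader step: search candidate price vectors, reasoning about the
    rental decisions each price vector would induce."""
    best_prices, best = None, float("-inf")
    for p_small, p_large in itertools.product(grid, repeat=2):
        prices = {"small": p_small, "large": p_large}
        profit = seller_profit(prices)
        if profit > best:
            best_prices, best = prices, profit
    return best_prices, best


if __name__ == "__main__":
    prices, profit = iterative_pricing()
    print("chosen prices:", prices, "seller profit:", profit)
```

The sketch only shows the leader anticipating follower best responses over a fixed price grid; the paper's IMP algorithm additionally iterates prices toward near-optimality, and its RSR algorithm handles dynamic, heterogeneous customer environments with proven optimality.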