To Achieve Truly Generalist Models, We Need to Incentivize Collaboration Through Fair Revenue Sharing

Published: 23 Sept 2025 · Last Modified: 18 Nov 2025 · NeurIPS 2025 Poster · CC BY 4.0
Keywords: Revenue Sharing, Mechanism Design, Large Language Models
Abstract: Large language models (LLMs) are still developed and served as isolated, single-provider systems. While each excels on a set of benchmarks, real-world applications demand competence across many tasks and domains. In principle, an aggregate model that combines the strengths of multiple specialized checkpoints would Pareto-dominate today’s monoliths---matching or exceeding every individual model on every objective. Realizing such a frontier, however, is impossible without collaboration among the diverse actors who control data, weights, compute, and user distribution. Collaboration raises a thorny question of who gets paid: each stakeholder contributes distinct resources and will cooperate only if the additional revenue is shared in a way they perceive as fair. We argue that constructing truly generalist LLMs therefore hinges on mechanism design---specifically, revenue-sharing rules that are transparent, incentive-compatible, and robust to externalities. Drawing on cooperative game theory, we outline how Shapley-inspired solution concepts can distribute the surplus revenue from such collaborations fairly. By embedding such mechanisms into model-hosting platforms and API brokers, the LLM community can move from siloed competition to productive cooperation, accelerating progress toward universally capable, socially beneficial language technologies.
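The Shapley-inspired allocation the abstract refers to can be illustrated concretely. A minimal sketch, assuming a hypothetical three-party collaboration (a data provider `D`, a model trainer `M`, and a serving platform `P`) and an invented coalition-revenue function `v`; the exact characteristic function and player set are assumptions, not part of the paper:

```python
from itertools import permutations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal revenue
    contribution over all orderings in which the coalition forms."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            # marginal revenue p adds when joining this coalition
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_orderings = factorial(len(players))
    return {p: total / n_orderings for p, total in phi.items()}

# Hypothetical coalition revenues (in arbitrary units): the full
# collaboration earns 10; subsets earn less, reflecting complementarity.
REVENUE = {
    frozenset(): 0, frozenset("D"): 0, frozenset("M"): 2, frozenset("P"): 0,
    frozenset("DM"): 6, frozenset("DP"): 1, frozenset("MP"): 4,
    frozenset("DMP"): 10,
}

def v(coalition):
    return REVENUE[frozenset(coalition)]

alloc = shapley_values(["D", "M", "P"], v)
print(alloc)  # e.g. M gets the largest share; shares sum to v(DMP) = 10
```

The allocation is efficient (shares sum to the grand coalition's revenue) and symmetric, two of the fairness axioms that motivate Shapley-style rules; the exponential cost in the number of players is why practical platforms would need sampling-based approximations.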
Submission Number: 43