OML: A Primitive for Reconciling Open Access with Owner Control in AI Model Distribution

17 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: AI Governance, Anti-monopolization, AI Deployment, Data Poisoning, Model Fingerprinting, Trusted Execution Environments, Obfuscation
TL;DR: We introduce, rigorously formulate, and present a roadmap to the Open-access, Monetizable, and Loyal (OML) primitive: a foundational shift in secure AI model distribution that synthesizes transparency with granular monetization and critical safety controls.
Abstract: The current paradigm of AI model distribution presents a fundamental dichotomy: models are either closed and API-gated, sacrificing transparency and local execution, or openly distributed, sacrificing monetization and control. We introduce OML (Open-access, Monetizable, and Loyal AI Model Serving), a primitive that enables a new distribution paradigm in which models can be freely distributed for local execution while maintaining cryptographically enforced usage authorization. We are the first to introduce and formalize this problem, proposing rigorous security definitions tailored to the unique challenge of white-box model protection: model extraction resistance and permission forgery resistance. We prove fundamental bounds on the achievability of OML properties and characterize the complete design space of potential constructions, from obfuscation-based approaches to cryptographic solutions. To demonstrate practical feasibility, we present OML 1.0, a novel OML construction that couples AI-native model fingerprinting with crypto-economic enforcement mechanisms. Through extensive theoretical analysis and empirical evaluation, we establish OML as a foundational primitive necessary for sustainable AI ecosystems. This work opens a new research direction at the intersection of cryptography, machine learning, and mechanism design, with critical implications for the future of AI distribution and governance.
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 9792