OmniLayout: Enabling Coarse-to-Fine Learning with LLMs for Universal Document Layout Generation

ICLR 2026 Conference Submission17811 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Document AI, Document Layout Generation, Large Language Models
TL;DR: With OmniLayout-1M and LLM-based coarse-to-fine learning, we enable universal and diverse document layout generation.
Abstract: Document AI has advanced rapidly and is attracting increasing attention. Yet, while most efforts have focused on document layout analysis (DLA), its generative counterpart, document layout generation, remains underexplored. A major obstacle is the scarcity of diverse layouts: academic papers with Manhattan-style structures dominate existing studies, while open-world genres such as newspapers and magazines remain severely underrepresented. To address this gap, we curate **OmniLayout-1M**, the first million-scale dataset of diverse document layouts, covering six common document types and comprising contemporary layouts collected from multiple sources. Moreover, since existing methods struggle in complex domains and often fail to arrange long sequences coherently, we introduce **OmniLayout-LLM**, a 0.5B model trained with a two-stage *Coarse-to-Fine learning paradigm*: 1) learning universal layout principles from OmniLayout-1M with coarse category definitions, and 2) transferring this knowledge to a specific domain with fine-grained annotations. Extensive experiments demonstrate that our approach achieves strong performance across multiple domains of the M⁶Doc dataset, substantially surpassing both existing layout generation experts and several recent general-purpose LLMs. Our code, models, and dataset will be publicly released.
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 17811