A Bounding Box is Worth One Token: Interleaving Layout and Text in a Large Language Model for Document Understanding
Keywords: LLM, DocAI, Visually Rich Document Understanding, KIE
Abstract: Recently, many studies have demonstrated that exclusively incorporating OCR-derived text and spatial layouts with large language models (LLMs) can be highly effective for document understanding tasks. However, existing methods that integrate spatial layouts with text have limitations, such as producing overly long text sequences or failing to fully leverage the autoregressive traits of LLMs. In this work, we introduce Interleaving Layout and Text in a Large Language Model (LayTextLLM) for document understanding. In particular, LayTextLLM projects each bounding box to a single embedding and interleaves it with text, efficiently avoiding long-sequence issues while leveraging the autoregressive traits of LLMs. LayTextLLM not only streamlines the interaction of layout and textual data but also shows enhanced performance in Key Information Extraction (KIE) and Visual Question Answering (VQA). Comprehensive benchmark evaluations reveal significant improvements, with a 27.0% increase on KIE tasks and 24.1% on VQA tasks compared to previous state-of-the-art document understanding MLLMs, as well as a 15.5% improvement over other SOTA OCR-based LLMs on KIE tasks.
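The core idea in the abstract — projecting each bounding box to a single embedding and interleaving it with the text-token embeddings — can be illustrated with a minimal sketch. This is an assumption-laden toy in PyTorch, not the paper's implementation: the class and function names (`BoxProjector`, `interleave`) are hypothetical, and a simple linear layer stands in for whatever projector the authors use.

```python
import torch
import torch.nn as nn

class BoxProjector(nn.Module):
    """Hypothetical projector: maps a 4-d normalized bounding box
    (x1, y1, x2, y2 in [0, 1]) to one LLM-sized embedding, so each
    box costs exactly one token position in the sequence."""
    def __init__(self, d_model: int):
        super().__init__()
        self.proj = nn.Linear(4, d_model)

    def forward(self, boxes: torch.Tensor) -> torch.Tensor:
        # boxes: (num_boxes, 4) -> (num_boxes, d_model)
        return self.proj(boxes)

def interleave(box_embs: torch.Tensor, text_spans: list) -> torch.Tensor:
    """Place one box embedding before each OCR text span's token
    embeddings, yielding a single interleaved input sequence."""
    pieces = []
    for box_emb, span in zip(box_embs, text_spans):
        pieces.append(box_emb.unsqueeze(0))  # one layout token
        pieces.append(span)                  # the span's text tokens
    return torch.cat(pieces, dim=0)

# Toy usage: 3 OCR spans with 5, 2, and 4 text tokens respectively.
d_model = 16
projector = BoxProjector(d_model)
boxes = torch.rand(3, 4)
spans = [torch.randn(5, d_model), torch.randn(2, d_model), torch.randn(4, d_model)]
seq = interleave(projector(boxes), spans)
print(tuple(seq.shape))  # 3 layout tokens + 11 text tokens -> (14, 16)
```

Because every box collapses to one embedding, the sequence grows by only one position per OCR span, which is how the approach sidesteps the long-sequence problem of coordinate-as-text encodings.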
Primary Area: Natural language processing
Submission Number: 5861