Abstract: Visually Rich Documents (VRDs), which combine elements such as charts, tables, and references, convey complex information across many fields. However, extracting information from these documents is labour-intensive, given their inconsistent formats and domain-specific requirements. While pretrained models for Visually Rich Document Understanding (VRDU) have progressed, their reliance on large annotated datasets limits scalability. This paper introduces the Domain Adaptive Visually-rich Document Understanding (DAViD) framework, which utilises machine-generated synthetic data for domain adaptation. DAViD integrates fine-grained and coarse-grained document representation learning and employs synthetic annotations to reduce the need for costly manual labelling. By leveraging pretrained models and synthetic data, DAViD achieves competitive performance with only minimal manually annotated data. Extensive experiments validate DAViD's effectiveness, demonstrating its ability to efficiently adapt to domain-specific VRDU tasks.
Paper Type: Long
Research Area: Information Extraction
Research Area Keywords: Visually Rich Document, Synthetic Data, Key Information Extraction, Multimodal
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 1410