MMSci: A Multimodal Multi-Discipline Dataset for PhD-Level Scientific Comprehension

Published: 08 Jul 2024, Last Modified: 23 Jul 2024
Venue: AI4Mat-Vienna-2024 Oral
License: CC BY 4.0
Submission Track: Full Paper
Submission Category: Automated Chemical Synthesis + Automated Material Characterization
Keywords: Scientific AI Assistant, Multi-discipline, Multimodal, Figure understanding, Scientific knowledge understanding, benchmark, Nature science, Materials science, Materials generation, LVLM, Visual instruction-following
Supplementary Material: zip
TL;DR: We created a multi-discipline, multimodal dataset from Nature Communications journal articles, building a benchmark for evaluating and enhancing multimodal models' comprehension of scientific content and figures.
Abstract: The rapid advancement of Large Language Models (LLMs) and Large Multimodal Models (LMMs) has heightened the demand for AI-based scientific assistants capable of understanding scientific articles and figures. Despite this progress, there remains a significant gap in evaluating models' comprehension of professional, graduate-level, and even PhD-level scientific content. Current datasets and benchmarks primarily focus on relatively simple scientific tasks and figures, lacking comprehensive assessments across diverse advanced scientific disciplines. To bridge this gap, we collected a multimodal, multidisciplinary dataset from open-access articles published in the journal Nature Communications. The dataset spans 72 scientific disciplines, ensuring both diversity and quality. We created benchmarks with various tasks and settings to comprehensively evaluate LMMs' capabilities in understanding scientific figures and content. Our evaluation revealed that these tasks are highly challenging: many open-source models struggled significantly, and even GPT-4V and GPT-4o faced difficulties. We also explored using our dataset as a training resource by constructing visual instruction-following data, enabling the 7B LLaVA model to achieve performance comparable to GPT-4V/o on our benchmark. Additionally, we investigated using our interleaved article texts and figure images to pre-train LMMs, yielding improvements on the material generation task. The dataset, including articles, figures, the constructed benchmarks, and the visual instruction-following data, is open-sourced.
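To illustrate how a figure-understanding benchmark item of the kind described above might be framed, here is a minimal sketch of turning an article record into a caption-matching multiple-choice question. The record fields (`figures`, `caption`, `image_path`) and the distractor-sampling scheme are assumptions made for illustration, not the dataset's actual schema or the paper's benchmark-construction procedure.

```python
import random

def build_caption_matching_item(article, distractor_captions, num_distractors=3):
    """Build one multiple-choice item: given a figure, pick its true caption.

    `article` is assumed to be a dict with a "figures" list, each entry holding
    an "image_path" and a "caption"; `distractor_captions` is a pool of captions
    drawn from other articles. These field names are hypothetical.
    """
    figure = random.choice(article["figures"])
    distractors = random.sample(
        [c for c in distractor_captions if c != figure["caption"]],
        num_distractors,
    )
    options = distractors + [figure["caption"]]
    random.shuffle(options)
    return {
        "image": figure["image_path"],
        "question": "Which caption correctly describes this figure?",
        "options": options,
        "answer_index": options.index(figure["caption"]),
    }
```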
Submission Number: 8