SMiR: A Synthetic Data Pipeline To Improve Multi-Image Reasoning

28 Sept 2024 (modified: 27 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · License: CC BY 4.0
Keywords: Large Language Models, Synthetic Multimodal Data, Multi-Image Reasoning
TL;DR: A synthetic data pipeline that generates multi-image instruction-tuning data for Vision-Language Models
Abstract: Vision-Language Models (VLMs) have demonstrated strong performance in single-image understanding, supported by many high-quality instruction datasets. However, multi-image reasoning tasks remain under-explored in the open-source community due to two major issues: (1) scaling up datasets with multiple correlated images and complex reasoning instructions is resource-intensive, and maintaining quality at scale is difficult; and (2) there is a shortage of robust multi-image evaluation benchmarks. To address these issues, we introduce SMiR, an efficient synthetic data-generation pipeline for multi-image reasoning, along with a high-quality SMiR dataset generated using this pipeline. Our pipeline efficiently extracts highly correlated images using multimodal embeddings that combine visual and descriptive information, and it leverages open-source LLMs to generate high-quality instructions, offering a cost-effective alternative to expensive closed-source solutions. Additionally, we present SMiR-Bench, a novel multi-image reasoning evaluation benchmark comprising 100 diverse examples across 7 complex multi-image reasoning tasks. Unlike existing benchmarks, SMiR-Bench is multi-turn and allows free-form responses, providing a more comprehensive evaluation of model expressiveness and reasoning capability. We demonstrate the effectiveness of the SMiR dataset by fine-tuning several open-source VLMs and evaluating their performance on SMiR-Bench. Our results show that models trained on our dataset outperform baseline models in multi-image reasoning tasks. Furthermore, we observe enhanced model expressiveness and more nuanced reasoning in free-form responses, highlighting the value of our approach for advancing open-source VLM research.
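To make the image-grouping step concrete, below is a minimal sketch of the idea the abstract describes: fusing visual and descriptive (caption) embeddings and grouping highly correlated images by cosine similarity. The fusion weight `alpha`, the similarity `threshold`, the group size cap, and the greedy grouping strategy are all illustrative assumptions, not the paper's actual algorithm; the random arrays stand in for CLIP-style embeddings.

```python
# Hedged sketch: group correlated images via fused multimodal embeddings.
# All hyperparameters and the greedy strategy are assumptions for illustration.
import numpy as np

def combine_embeddings(image_emb: np.ndarray, text_emb: np.ndarray,
                       alpha: float = 0.5) -> np.ndarray:
    """Fuse L2-normalized visual and descriptive embeddings (assumed weighting)."""
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    fused = alpha * image_emb + (1.0 - alpha) * text_emb
    return fused / np.linalg.norm(fused, axis=1, keepdims=True)

def group_correlated_images(fused: np.ndarray, threshold: float = 0.8,
                            max_group: int = 4) -> list[list[int]]:
    """Greedily group images whose cosine similarity to a seed exceeds `threshold`."""
    sims = fused @ fused.T  # pairwise cosine similarities (embeddings are unit-norm)
    unused = set(range(len(fused)))
    groups = []
    while unused:
        seed = unused.pop()
        # Rank remaining images by similarity to the seed image.
        candidates = sorted(unused, key=lambda j: sims[seed, j], reverse=True)
        group = [seed] + [j for j in candidates[:max_group - 1]
                          if sims[seed, j] >= threshold]
        unused -= set(group)
        groups.append(group)
    return groups

# Toy usage with random stand-ins for precomputed image/caption embeddings.
rng = np.random.default_rng(0)
img = rng.normal(size=(10, 512))
txt = rng.normal(size=(10, 512))
print(group_correlated_images(combine_embeddings(img, txt)))
```

The resulting groups of correlated images would then be passed, with their captions, to an open-source LLM to synthesize multi-image reasoning instructions; that generation step is outside this sketch.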
Primary Area: datasets and benchmarks
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 13519