Vision-Language Instruction Tuning: A Review and Analysis

TMLR Paper1890 Authors

03 Dec 2023 (modified: 03 Apr 2024) · Under review for TMLR
Abstract: Instruction tuning is a crucial supervised training phase for Large Language Models (LLMs), aimed at enhancing an LLM's ability to generalize instruction execution and adapt to user preferences. With the increasing integration of multi-modal data into LLMs, there is growing interest in Vision-Language Instruction Tuning (VLIT), which presents more complex characteristics than pure-text instruction tuning. In this paper, we systematically review the latest VLIT settings and corresponding datasets in multi-modal LLMs and provide insights into the intrinsic motivations behind their design. For the first time, we offer a detailed multi-perspective categorization of existing VLIT datasets and identify the characteristics that high-quality VLIT data should possess. By incorporating these characteristics as guiding principles into the existing VLIT data construction process, we conduct extensive experiments and verify their positive impact on the performance of tuned multi-modal LLMs. Furthermore, we discuss the current challenges and future research directions of VLIT, providing insights for the continuous development of this field. The code and dataset related to this paper have been open-sourced at URL\footnote{Anonymous during the review stage.}.
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Lei_Li11
Submission Number: 1890