Fine-Tuning Vision-Language Models for Multimodal Polymer Property Prediction

Published: 20 Sept 2025, Last Modified: 05 Nov 2025, AI4Mat-NeurIPS-2025 Poster, CC BY 4.0
Keywords: Vision-Language Model, Polymer, Multimodal Data, Property Prediction
TL;DR: This paper introduces a multimodal polymer dataset and demonstrates that fine-tuned vision-language models significantly improve polymer property prediction over text-only LLM and machine-learning baselines.
Abstract: Vision-Language Models (VLMs) have shown strong performance in tasks such as visual question answering and multimodal text generation, but their effectiveness in scientific domains such as materials science remains limited. While some machine learning methods address specific challenges in this field, foundation models designed for broad tasks such as polymer property prediction from multimodal data are still lacking. In this work, we present a multimodal polymer dataset, use it to fine-tune VLMs through instruction-tuning pairs, and assess the impact of multimodality on prediction performance. Our models, fine-tuned with LoRA, outperform unimodal and baseline approaches, demonstrating the benefits of multimodal learning. This approach also reduces the need to train separate models for each property, lowering deployment and maintenance costs.
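To make the LoRA fine-tuning setup described in the abstract concrete, below is a minimal sketch using the Hugging Face Transformers and PEFT libraries. The base model name, target modules, and LoRA hyperparameters are illustrative assumptions, not the paper's reported configuration.

```python
# Sketch: attach LoRA adapters to a vision-language model before
# instruction tuning. Only the low-rank adapter weights are trained.
from transformers import AutoProcessor, AutoModelForVision2Seq
from peft import LoraConfig, get_peft_model

model_name = "llava-hf/llava-1.5-7b-hf"  # hypothetical base VLM, not the paper's choice
processor = AutoProcessor.from_pretrained(model_name)
model = AutoModelForVision2Seq.from_pretrained(model_name)

# Low-rank adapters on the attention projections; rank and alpha are assumed values.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # trains only a small fraction of all parameters
```

Because one adapter-equipped VLM can be instruction-tuned on prompts covering many polymer properties, this setup reflects the paper's point that separate per-property models become unnecessary.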
Submission Track: Paper Track (Short Paper)
Submission Category: AI-Guided Design
Institution Location: Fayetteville, Arkansas, USA
AI4Mat RLSF: Yes
Submission Number: 71