ALMol: Aligned Language-Molecule Translation LLMs through Offline Preference Contrastive Optimisation

Published: 06 Jul 2024, Last Modified: 28 Jul 2024
Venue: Language and Molecules ACL 2024 (Poster)
License: CC BY 4.0
Keywords: machine language-molecule translation, cross-modal alignment tuning, fine-grained, domain-agnostic evaluation
Abstract: The intersection of chemistry and Artificial Intelligence (AI) is an active area of research aimed at accelerating scientific discovery. Integrating large language models (LLMs) with scientific modalities has shown significant promise in this endeavour. However, challenges persist in training efficacy and the out-of-distribution problem, particularly as existing approaches rely on ever larger models and datasets. In this context, we focus on machine language-molecule translation and deploy a novel training approach called contrastive preference optimisation, which trains the model to avoid generating translations that are merely adequate but not perfect. To ensure generalisability and mitigate memorisation effects, we conduct experiments using only 10% of the data. Our results demonstrate that our models achieve up to a 32% improvement over counterpart models. Finally, we introduce a fine-grained, domain-agnostic evaluation method to assess hallucination in LLMs and promote their responsible use.
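Note: the training objective named in the abstract, contrastive preference optimisation (CPO), is commonly formulated in machine-translation work as a pairwise preference term plus a negative log-likelihood anchor on the preferred output. Purely as an illustrative sketch of that general formulation, not the authors' implementation: the function name, the beta value, and the dummy inputs below are assumptions.

```python
import torch
import torch.nn.functional as F

def cpo_style_loss(logp_chosen: torch.Tensor,
                   logp_rejected: torch.Tensor,
                   beta: float = 0.1) -> torch.Tensor:
    """CPO-style loss over summed sequence log-probabilities.

    logp_chosen / logp_rejected: log-probabilities of the preferred and
    dispreferred translations under the policy model, shape [batch].
    beta: scaling hyperparameter (illustrative default, an assumption).
    """
    # Preference term: reward ranking the preferred translation above
    # the dispreferred one (reference-model-free, unlike DPO).
    prefer = -F.logsigmoid(beta * (logp_chosen - logp_rejected))
    # NLL anchor: keep the likelihood of the preferred translation high,
    # so the policy is not pushed toward merely "less bad" outputs.
    nll = -logp_chosen
    return (prefer + nll).mean()

# Toy usage with dummy sequence log-probabilities for a batch of 4 pairs.
chosen = torch.tensor([-10.2, -8.5, -12.0, -9.1])
rejected = torch.tensor([-11.0, -9.9, -11.5, -10.3])
print(cpo_style_loss(chosen, rejected))
```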
Archival Option: The authors of this submission want it to appear in the archival proceedings.
Submission Number: 2